[Binary artifact, not recoverable as text: a tar (ustar) archive containing the directories var/home/core/zuul-output/ and var/home/core/zuul-output/logs/, and the member file var/home/core/zuul-output/logs/kubelet.log.gz — a gzip-compressed kubelet log. The remainder of the file is the compressed payload rendered as unreadable bytes; decompress kubelet.log.gz from the original archive to view the log.]
E%k{]SkHv]9=_~ϥjIEsjÑDFRk< yP\)$9g;K5G,50 ּS0ޏ*`sIEMBt i$FpMH*|߉ :DD,c4NiۛifuFG*#R"%ѱTH Ufi%MM@uzu8{o?9&?Y/ qp |.}VϼOU{pu R)뻞gAV(v̟zd=ɯ@A2֝Q&jŢwۿ9yGSŶxbMRmrs\&.Dx k8;V#pziM܏@豌1f%,Mp8o+󺍱";exPQC{,&jC3L%=/9(,L" ##Q`Ha4>+8e <:2yaߎ1Brs=pP딊 3 `xzI-W. %t,H5pl7ل`GiL'#hc\V$,lE=٠=oUފ+4p1=qǍ$W%FפʏV0#)6gF\ĵa"` d&XLi[Oʴq=>ӽQη|3Qg@j|Dswmc!-Ќ;*wR-{IVޞj^{28?}]g1ɧ]6-wZ{y:UDgzI!Pm~/ks^)^ȼxS_']XI5=]_> `0F ;xj'Mēlndwz͕u|r%."Y:{ l3Ow'H{C'%ZCɎ9~Doj~UY0np~yBOٙ$ InYdqؿżO^,~PDϭj1NغS2ŋI)9D_(WJ@#"; asr`\\UWr)PFaWHi`*iwqW K,a_Vq0>:_/Tnb\+3x PH3D98+b!XC> j-LO~zM{M "j_6(=ܗW#M/v/O>']<cQe|4݂(@1ʿ@L)81yA!( U*Z6&RWm vzeX霾f8ٰ=Jt!ކX|4!I8˟ob]R{ZK%wM]d0ή/:NZĦgӡQP &U5Xi<"J1G,N--̐$`F <* s]`FʩR-VJ\ef Â\7Meʹc8Ԙ<<kQct^oB}B))&SNq `7 ,3&f2ֵu-Pᆚ>- G#W_-t?Ѹe E#&o# \HZx r6hd#H6#"b1h#2&"X>z&ؚ4 #7 Ɠӫ>wR9ו.[Q9Feݴvsv%T8 y0d4^JJ8%Vb%0linȲ TKw3. ͭn0-c}3{WkջO&vya.[ ׏lXQ(֖6QT^ML;i*ԩ{MMA1lhm%n\GwzcrK!ӭۊu $%T&uj>JF0#=w>淸N:mhsta2swem$I3a<gqX!)qMj^Ȫ!EȤY˰-*gDqOܥڑy0AusY혷1*2Qy#󰣡8,x6Jk:u":ס諞 æL ( E1o g_vڍ/%7c`)Sp҅V\XT^*ixfQ[Ƣc\^[6\Vd16y>MIQA)ShcnEL0&6t:̋/^bm:[<l25u!D 7$2Pc2RSCH<(A ks΀t!d5TC:Rb&y$LViRe)A@kYfCliijY!yZԓ+@WzsZzN&IOCUtGaYhA.lcJY&u( q-5!+ :DD,c(Nhy$hntYZ-G*'d*9:I$s2JaQiƝVc  t?H&vsǒQEgcVunfmC!XOD6Y('SJia`')ָPbٝqolwO\oV*Ihe=؇2,S dF"rq.eͥf֎.fso/HO%Pi]tb !r٪&dU0쯰'&9~8|.m/?)7z`fX|\o.3Ac*WKA M3&Zm~Yq"\@r0x4* cy\O勷up3Зޚ/q oz5kE@j_erw7fQm-8rkKoo鶩 m:Fa<)IuÒ$d*xGh_Ueqzw7@^_z+Lջz[: ̫.2|-wrj)0۱pRE8'9ӵ$f~?mu%5 \]E#Ʉ_گڞudޢiX\ɠM:fLUBV!MusacԑEGDl N%,+MoF=_5])2K@)à z#@ٗ~jf7?O&wWUrQPL" ##Q@@c0) @'h8OG;aQ]y"㱕|8> d0N,ՙ _ I(>虌éu]!وx4+$ >6]S1.AΤ\FC {?Qh|"U x!^`W `{;㰶JbaXb冀p Tf(Ĺ4F4%}Mm b2!]cz3!:;Gm7ahnp–!ݐ>[ǁ|j7:\ҳ [ ]) $?|g:>b0xmrn"ئ_qTO}vwGX`#*V͉B/kM ֭Ak"hKaW^8 0}?חՏJʛQ9_lhH DžXXBHT@=m>"t~,Er" Pn-<-ڹv@iP N%\Es7x5/zNe&#QHYu2LwZ D佖豉hj` iȑmn~+ż'"C*:p5&+#G\D"AE%2mX`Hj,9N`.0 |# 堵2,,VJ\۰lϢz$W4{ Y6:yICi`Q۝?hr^Soroದ2lA ,?Mnmc(b +B.xeG& =,  k' GkQ]%b[Op).,#32x5sf"lRwlٿ3F)g Qơ{μ~/<^Q>(7d^fΜ}&G96Shd)!fDI; fY!#a:0- 9rd-/ZׄrpXBg !hL> 'H @ !S!c}9Р4&=9yWcZadg($^2#YobA% 9Ij$f.Ec =s)9Es "w<ܖ[TpLNv*799 Zhve˄v5LjThh!,^z>|G \ꢑG^Ct£SoB4Ȇʹ]58swe 
2SW՞`}]pZ֙i-fT]KUCNl>ҍ3 :Q.ńVP@JOa:dz=.:gJPrqҾ]U]b-j^+RXowX ͷB{Ğu{*WUN|IJ1?h#۫22_Sq˱URsu30ε;_)1|d:3dCS߄Q2/ sm=:ksAɧt K-~sC5Lz_\)8BN1V!qb4Nb!WM'_J6{0}6%tioXo߂AW|{޸nχ #z!Uxd!R&51тFQ4`pH޻^/>hA>m tx7_}yqG\D[6^E_-|t]d盞o:7XH>UMFc+Q"!%K9 Q Ƒ|| ߹l֯?.#]tf2.x. + j6 eYjYE&E#5aQųi'/TZ֩9IE_-dh6EL{\?@yU(j8y3d8QB;g{#A$cpc,^ 6R`r0l_ls80٩ySನh;H[bw9Q4IuJ`H=ٻ6dW7#W?M $uO0E*XVzPH-z {OUלܫ`]T|.m,D9d E$0R!]u $g~I)sz6)r$KIЍ"=)#:9;gMYN-nӕY Yp LG_(!ɳ VsjcDd (R8(&76HO/&G ,ɦ(;0lr쓥RbH蝴:H^C *uKdAq&cҁf>CsGfAH*<벛m2"#\Vˌi-k@%Ufr9 u5'lФM e, #B9K$"-YTLPBB=q %"]W9kÁV  !kg5mp~xr;ZoWW"EEd!ABw|zip}ƨ=`r<`V254h :kē.xv̍u4R6>x%QLZ0^k }Pv0^׳HM\b wI! :!S$Ғ9N!6Azkt;gkҗPj*,sR,Ǫ6["}ӽݛF8y;(cE;BV+hiZۘյm z4AmNGuJœ 9$VƣMX*Ls(˨VoNx]RR5) ѡVʀ"#, ol:g2hȣcE:c`YWp~/ɗQZ4GroZ)Fz3kgRկW,[_r/C-xN|8;֬H7[g{SϜ_R_q&ѵjEfz?=](/D= f ?V{ 7=Tc]MyQO&dd?MJk>|Z҂YiN_Y^zc/:e"17LN4^p(\ob^tk^CNRdqi*n]L% ZG'#d2%MKc_iNgZ^|[/|wU3:Zx4^@˶/vj5`iq,%Y裨ۉWcګOx.AmS rJFre#DNɘ8j@0S`$]֥ї %+!JJ $@ {?׊D]Ĺ] !ه!]3ݽgBT+%MݾVc\v}p} ͖ Rsh::<9Bc9ύdA5&gXW"cԕqGA]= Ę 8Ῥ w1bJ(y WɎ ue)o4\rUQ`'(f  *#~ҔMܧ[@ln(MQ! &CvƥG|ҟ/h3 鷡95N* \y V&ڔgLR-eo_U>?7j;P4 MXGfUy.0:t#alt"] Jxus~z6~+}Zl)&O0u8 杵@*d&lRN(C daKU9B݅YI4AyB9k)7*ۅ}q6mM83Qm'%F?.(>b.54"Lƺl 3OR"':og]׮)|l0qqj˟2侀jvmJ>@/|hLZRvlv s`]`*NN +$IkMZx01mɑJIOQloe)jH:'yWM9mĦ' Ro?D\8(im#.z?P =ys%ŰDDNyog=Ɖf7y.|,c=,1I?ӯ7C.3m)V7J&w͙, -=y 9%8D+!n2I~py09I l3;7i$Zrz@.UufrɥgEҰ-<ʘzF.saԽ.{gHfD#j*2YvvLRAR lt59m]Rjm]/h@KH/Դ~߻ҧս%HM؞*8EeSӱv9+"R-S7\ _6JϋYZ!~0OvGӳBp |:MNe c/?B~T [a1 ?Y47oZAFvlҮkvVvilCa@`]\Mٍ[Ü~+8bWxnϣUgU jF9loZ&6?^1Mrg&X` 9onP;I?_oAI=B% t xd`Rc^! KQOτMkc <#9s@Ť%#5G :Ȥ%$A>pPlBM#wtJ`6J}S?I<411[Q_FֲӋī[&JZd52w'C~,JL1*I:o>E;y+mFj).RW/UEZ|_ϡ ֪rUUn}庥+^{psHkGH2k=U 0*MOgtAF!S5*r9獱F圵 *y>tAp_D?nEi{c Ҽ\$ (|-,NZsD J@)C}.jptV R6*-ӡN+(gPnijzn*?jqɢ}W*;̾N_\H 5#Xh\*/Ih#$&D%|Ö r5D%T;-(;:RG` x"^sID =-{Ǘ"M д85I*ʃ2\p^CQRţ q[E3Z. 
HvHhy&d2-E-:UuB(QC8qa8R86/i|b(C*# ޙ!e{Iݣ5 Bdbs)H+iL^"agI 4 Ebަ,(<*54.=z^㴞3[3uO]*%r'iζ.m6vOg'H)x$+]zyt{zbϤ7cݻYO#+>@>7g)13ol|ww ]stv{&6^JWفXVB kڵtD{`hzHM*.BHZ/'o;W%у$TF ̊hX1x#?@~$S&2H]uR~[T`RǤ*RMٗ醃tּ[GKD`*g)@PYI|263!0NoC{E3n+YO, [sg}u9}av!2JhY!o/m,g %p) bTm⹸(0(3Y`jDzMK^t|К}&cr Xr8?m> m}Rb׳/=#`6dـ7 U!r]Ed 5A)B3),C8,Yֳ ~6@3g[@;̟wk=<[ -|A<`Ǣ_Z3Rj.2`<0<-:=2P<3(v}ØybjaګɪtQRO:b;m 1 ?фRORn|tO 5qHU ~h~ ؘrIh㻣 xk4{Xg6LRbo٠NiPn&1{BXIl^wLMXOh)k*m(LD dj}5;֡iV1UYQp-tpRl U%xo9I#(3ns0DP(T% R Dn >7ߧxW.k6^V:_qSCP# ۳~ ṍr^t!(hu@tNzWi693b=;;l$<1&FQ&yX&E Y"j%I,k N\-)c$pPq6V;E!2a*y) ^FQO7[7N3%c_jŝ_(VDG(XV30|-wfW|oIw49-"JaSB dc I#>M| \-r-v9ԜE3xddJ <x`!i(Uh|DF qcPȸgq) `1"rQMhjJ#O]/mk\-]˅H~@lW _iOO VXTJ&OWoOO?~ӏ?}D/?}G:XF> t'ҧvuF[ѯW~>ߐ_{|q+O_gu< ҏ pQQ.7Snm!ȆG֏:|soo 8['0#_e-V"; $Rq]ؐpkjs:J?H7z˯k( ?[@~7 ՓTuKKp*Agf,2J*4U1Vi9#`e t>#:a3sho}T[qn0bGGzѕoHx?OpJx;brDg"dR:QMHX ΙQE/ "eXA"i4/D<F1.K*N)}KbhA;(b&i _& eOV)¢!qPD XHWJ`U Jn)` Ғ9%n+(g06`.ƌHE$AKH.8!(Q{nŭPc-7 H;$^4=WkթP/z]}su1X(uY&Jξ D%@_9Xʐ+$t53CTԏҢOڱCEz!?MgWQ7`%d:ID 7*.]~m\ܪ)n~cZo*ެFD&gy4-WCr(~(~QbIh駯[8@땸Q$zS9Ki*J⫤ d|>w>ߓΜt[_}~[#eEPck.o5?:B;_.٣>C^y AJS ĩv%oDVHu mm"X5t><ΝN[g_¢! 
%nĥTL$А}'c Jq&JIaQ娀MsNQmcQ/-~։Dem|0`[\X(;QaLԼ|W1tWUBdUGf'pٝBڶ˘nݟh‡)')u>HXo8\*!4?w%) wt ŭvl$4c{Oх 5\_=3&IXGlPt77=YؤI2qb;dx456u&u2X_O[4nwwp򪬨pp8:8)p6pZKi˜BX 9Z"( *GsM)}M{"xLNЛScbӗ5z/z)䬃CR Ѳ| QF5~F.gӭכߗEȵ+ U tr ˰=K=dLd6jj%ybX^[%dSL$Q_LΣnTu; xa_[|Ň^-ϛ.^yOeL1T, $KB+E2ܥ$]2AfI4l oHN;kTP`u4$k%NR^rJ.F~?hgiRj}ٻ޶,W¸`L3ۻ_X4ql$S$%K(2e)NB[Uqn>~Xlާh ugy(R1Jl|^g.GSwNwQ*WdH^|=EfFE}^ζ{TnB >/#&ĄEwQN(ffE(|1F/+ut[R[V䳥rPS- \QpԜw>sgiڬQ$b^0Q#Bm vakK^~\^O|LF%KV߾99xJxKod`ϼNfw\ ;FUӎ`$JX.+G$SQ{k1`dDGwl`tlp=>2۱SZD+K(HY :Fp,!`$nKߩ~ mH Pk̫`]TN[HI)C$ $b,Bh~P_:e`qvKIοz,2_(;=й˟3:ZVEaPA(ዜZcx *(NaKe%$YZ>Ax{v6Qdysŕtnͧ>upl\ǿ.'l͏xjߦ$($UV*XHsc`9k9-7'DJ %~WdLJzBB xE%P6-[A#c ~Z,BV9iGbThOWS'ߞ=l_8F/16[8~%JAFW琱4܃t"XU1`J8+blr4Fj'w9@iʈ)ZM8{q.]֥=#}BmЅv y vjǡhD_ֆՔ!+`G9:0Ԁ{AՊ~!mVRs\3[7'11Yk\lt6ifpL{/5 t*_9prgUV&ڔi7܉̝(܉qf E"fbTJa?p$S^@j$:~kT옍Hg:TIVeY2,pJFRT<3{#yi^>"t-gtm>7T8U|UW-jW*{M(j%}t7 |Tm)/f>* بe|^eX%8RTҁ̾Eæq=LGȇ[ɂ!apwd}8?l%3єjbTܐme1ottU[#w)4 颰Ҩm᭑\|S d ?LJ|p9&U5ϖ_89i ђ'pW7:cƸINKHP~IzKM ߿} /6W5ۂM.ݧswRv<iJ+HRS sƣ"NKFӅҪS wzbvbjd0G?s}~gDpg²ޯy`? o]vs> 붘zreN k{%>^,An9J[BcCörW@;饦joSy;'u*wPUI9߃dA0<` 퐫 ,[2Qa`Z>N1ۈzI6v1#L?7i Zܝ#AӭwG쯼f:K_cgGxz3ƊNLh' Z8v11&o=Tjż`7ޔ)Q? 
cÄ}b6lb#yH:]  [ȋey5ǽYXZmv^gͥ(k<.?|wGMzr1m^0Qsq=iJi}{wzn\^OM0YQrwg6rk4E l5=lƼ1}ČC S +hYt&mݬ]0ƾFr!]}&.'VJHQ™ B*mXpW.+fÙƘ_lw^ܕ" Y\0uô\[Bqٌ7|9XcUwTd#Z#(lAY;,K'@!5N/˴[64=Z֑ŻoQ1a s&MNytFV(N|+8u\1uͲEwpOpl\+}v=>m^P/b>'W8ŲKV$Bo,jÕKJEiTlivz^[$Nb  lLj}d!rVR"k {GMbS8& kbs-8F>) )E1`3,"QiD39RYMbi 2r|tRRW:KB)",H{ )2Q!Z$XJ$D%!̘fSZe6s@t I#JGʵ')YXyWSQRSc jCfuF`"0Ǩ[ C 2ƇTIbF`vB+Y@ {CV HزjDZbŎJ|Isꔉʂij(kZz'^ؠǟ0.&T@TIY(]p)k) `JM2d䲥1B $l[uȾdԚ>aTf*YJ;ą3ì3}f 6$⺨p-%VI,Q>@h EHB_ p ƒ`9IT* S|ђ(:>Zb)7yHrI!;(`Հw|8MC;4Pp iCh-Yիa ;_RQ'D:[ <)0ЙFDDMģd,2q~8+)$"7%!@0qPseظNbՀueP\ 0?|o1AAOCwM!*Wvf !y>oIyfIBb}NiNah脈]kO#I+|]i(i>[FHS*ų lCc}Op 0ͭ.;oF޸8BI1YV  ѩ-C^b-rI`UWuD8M2Xoqt/{έ]b_|[1#9I`BTRXtD1cy}Ͱ`.cřv> ) 3jYSϻ uئ:/`:pĥ8(}tmU28Ut)\]h)7VSc2(3Hv9-ȗXP[ezPr ҈H1d"e+f胕EJôL燍Uh8{ ȋ ይRtK4К5'7tb,o+,4LkTRp0U UmeV˶P oT[ 3]$y6$oYBeƐ%`&k1w^y^٭0:-]<۴|VV14f @][ qtU  G4T0 0l7,E4h{y.j #e`09q=)oFtihGc?05NJdt A(P`AHGdi'O H@X 7nkdXhlFTO/+lE^H"\x>W+f)oG]u63+v*ĶC!DE HbQ<TCg=kU4~ aQ=+NFp `Niu)fndjH 5j*\#|rBXH#\5%i\4 Z`ɑ]r5WhK˲m5ʾawU\@`acNtaY jC/jC ~4S"p#FD>8=ǨXfT 9$bUrDL0K1!;Fff%Ф1%\FCd]Xr준)`$di@ui,?uj+z3!z crTW;/tMhW"ԁ׺=vtdY_y_?iU;amC5 K:[.TKHTT?HJ %VjNJנ[Q>( ys@u#4TJE#/ӢMxJV-J ^iC%s 3bw>ܯ=϶ HqUQK/DEŔދ|1dw k;!],^ɣ4yfs+V䲁}" ˮ1RMC#tRy;T2״V!6^SӷaEH آ&j7']^UU"Kuqz<98k汽<>O@Y^zjba&@/WP'Z^cBj/՘TcjLP 1A5&՘TcjLP 1A5&՘TcjLP 1A5&՘TcjLP 1A5&՘TcjLP 1A5&՘TcjLP 1zkLHPUcI YcjbjLZvjLnZ{MJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%+V@0غο%aWJNJWmj@ѻ^$U/t~çNufҶ,CULw&i4R!Cfxӱ^]c5F;f)@}DΣF(Phf+BmSw3+xF4j?1Κ؏q4snx= Z1^!%.ix֎}3هdw|kItTG92۵^&GL^NtҤ&4I'M:iINtҤ&4I'M:iINtҤ&4I'M:iINtҤ&4I'M:iINtҤ&4I'M:iINtҤ&4I'zuҮxI:tHkv]' vc"tүcWFhR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)^z&u֎![;\~TJ^,O0x,I%;^p #\^p 2.K/u\w{-_ꖣ9 ?#4zJޒ}bv] I'C=>4(XL>MO&g~Û :ֻ!ސ)/A$tsVct (+IF,l (ˬTsVȜSqʚL{|MgAcZ<[.Ϙ[`ՒqvvxrP&(0I%I/rD4tPfS\}BX-Q7\m r?]vl񯵓lp˛e77^^^|0=r⍋+ek9Mr8']i~ ys3ݕ~>?tbUnRy>lM*[Zi ]y(_~?[ᕪ,:I_k|:t{9 y&:0s訙+^Lf~Y4:zd;mFgjy-M}F`fEkf0OkUX\3q2eE]V͏CtWB\m~\i&\OZYΔlq{Ty/ g˖qfbA䐙*Wkr6lorڸ\+5 MKh8s#̹gBϕy>^O Ug锠PugP*D Uy4x1;8\o^叽=fp 
wpT+?hPѥO?uK_dw)/u8(;whÏI ܋&Ukb,3J Tz W .gJKP1(ߪVlEJ3e+=9U)}+n);S-+=z_f.1Vv%UWj*[sY3UiUb,S2KZh;)D =\{$tpbm-A{L򵢬.y_5⟎Y'J RЮKudh=7]]%},ϧH9[{ ^xETJMIHo}M?, v֭UOfJ.ܮ5J2P:Ov[Y9ߧ[?<ΐ|mRTIL÷Oɲnm_?S՗DT*qے>>q:@'CUܓ._J봚rytkc\v| 9vF17TJLTqrgq[u|IŴ`kL)" "%\O#DZi";*?3R6^hl"[a kϵ|y$MrENHgFHd?Vj9)$k`1OqHߴ5_?rNT1roޏ7̓n2O2nrL֡shSLp%{lC.e E9_G{w] +V5#CaDmIjե dGg7/*f>jt6Iʧ~nc29} 9%_.uy}$zlK]kA?=\;]MU띾rRH9y zQ=_=Y^ޖ~;?o|wvzpuprGtVc+خ-f?;=_T<gߤo:15r֝gL\f>Fa(L>~wNnNol^зݫV^1Y`{7Ij}@"dl܈cC5>æۓ4=>;_~x?½ow:m lOw7`SiZE~h|7frٻ6dWlݑ~1v.@=X'ؗ5JPBRz8JCȑ8ڀm*)qRwtP`_gIIU].>`Y-ȏomFVtցCxb VYWld\kas\&r\wK Oq5ui4Viݚx\|@њM-S(FIN2<΍'P_o#;exPQC{8Rjc3e4(̤}cqD1q(F(X`028x8U?AXcg5aQ.FH!<=Xk NT|lmU+g"*pF i6JSml(HghXB})n`P>RL@ H8KbdF/cL%նfl([8c_][օ:'Յ ܦtafITܝ9`R F _5!R`Fpyͣ3LxĜ;f9Dx%9a.$U42XOMa$M* 02.DjLa[v4+eWvkܱ^Ԇ޷&N/ł$,JBV~Å?c`p, %`1Xq (r!c2I8A;$ r ǐE&l=rTiL5rvҩOT*hFl?ՈeF*Uh : ĤN 8Ij$fXڈt>KZKIK"dKGu.>;-]#彂z4qec[9iSȥ6zt{tr\;xܲo~\ ɥ>;@d&r ?;ZF~'5QY %ZH_ѱD޺QPrQ]Qϥj^.^!WO7MgM=&G/0)LAghѕo?o-([^þ3{3vy:3L!%0l*׿B áMMFѣO \0sozo^Lż4\0E&XbR:?z?77[Me%TRCau#߬DD+Ӿ:\d%OYXE$"_hY"M%)LD3m‰\OjS‰J)Ma$tФ\OsO2uu('NRQ<`8o?~bκѧy9Z({3g}r#Qp &/@RT_ ! 6U4sxLMb ʄGF%L\ "NY#L"3 HjbZ!K\Uۧtig,pɃ*&K .74 0BxϹ1V+D3 [RIl< >] y_xXjFN>;K?/i5;~gQ3wFΨ9â 2KFnp.7f r3\ǹ.7f r3\np.7f 6f rF|xh^ɐBLNHGыqɾzI֋Y/fp n3qNﻗݩS SLNf"S5d@%^|sqbX=cg,Ş3{bX=cg,+"xҡ:;jB+b biMT$p .rA)X$ǢzYsJ0~WOv=KO2X^!BtP-ut8$s(DJ?=~~z?ś.0Qo{7 V`\%L¿Z\ZJ?~f'.]_)qROqW~`E9aނodEg847*`hڜuMmNf6eR,u׾T! l pYc1׷8I|c(B.Q7_=)*zp WX4TG2&nR5){eB/]6?Ru/jܠ/2)Rָf<|Wٽӗ@ƠAW`Xo{\@U_"r&X`T9F$㑅H>H9#"b1h#2&"Zuqsm͍č7yK7S:yŴRmHA. ̊.r妓r%T8 y <DX,-,xGI4 Gڐgy=XAz˟&`f{+E ZF (2TlS|9$ew䲴d> Y0`O07<,5ru~dBB 7n{tQR6v7sD۶LWw*¬庤6BY g8/naE'P_wL jX=ށ޺ex'nO=gV uu\@`Z.:6V=ɌM*p۴=]C(yvM:цu}j{hו_o(_5'O'-NZ- 9HW180DZPMMJ72%齶S1fnvDf .7ߥۂƞW3۸-H6kl꼮4>^T#+DnqW*KRpyUB Tq&B S*\Xf _j\-Wz^71Aa îD ̂EGiSFG*je%Q>rpU_LΣnT0en2Y3Vh󅹝^UM^[W_D&QͽJADSBU*`BapEJD1(n!1VYXh PHAsp6`#*$;A Ut#-Y_AT-A-|:c=S ٳJY2TD!h1 ǍFrm`TM:&䞇:_i㋷џG R]?sZ} g>hc}`<=k>H)F3*>ˏԱ4py&EMlΒ@ G 2}`Ё&%K}HcAXI yD*PAL! 
&Fnn,{~L%Iqr EvS 4?3oM> Oy+xӧ *ǟqN u{ߟEZ]i2)EizH@S!E45>sM%Qy?uo޶ 4{kж:n;O5e]l^nklCknM`~홟G8;lN5tn|``t}5/\-NFGfr},K)\Ϳ_KkuFD2 }u3'ѓk|/5EZ= ǴhUyfy{FS@-[cYr=d- E'" W"fyF#5”5A{qA: )YKE$jV&vg-q8`ho6_{($J_,2S$F|Sh&K֮4b+ ևo_N-o,PSP(2Dj=Pj2$(Μ󬕑@)VUWQ(˕1̧8 &E.ʀ JR! ipde]AIS+ה,4, ^:4T|e0]uVQ2)`)I)6!(jg[fV:.)W ӛq1VM9)!)(9hR&h&dSVm,HiU%G~еW=גy.U&7LӁtB \AR\c6xykT'AK%JF0/juvP|Q|N;q/rܶ:1n%'\dbdbN{,2Jc_a,2\Z32o;҂5|~{XE9S9!w$Vvԑ#+$ۓ:ɵ%>kU- gqpx?΁[wm- nA}'-r6E"xeT#];Dp !N̔ЧRp麪%2b`&_kJяg\v|cQ/8Ռ-ypŔ(f2|XۼL?//_mTˢUv|èfLО[.#Dt3Z*)+V:fNitUWS1W(Go_Q*쭹:O7Q?~Ef V+3 ,9'Y!ifRcG'eHMAS4'"HDMSJ `J%IYN%ٸ@_Rޛ)]*H1\.;a]aG8s#S2?j7hc?߾e_h޾|/%jJd:2 =^F(RFIfW5$N@QDkWqUmd-Rd8MO7W‹Br e(g@^Bett+nt-i}+ּ.ߓc6mQt(#3F ^H1Ή3Ӯ*#jbWQ>hr~^ b[WGMfmS\.iPۇmɇC\ @k)lHeʂg ufi%Oފ,P>Sdܵ#4I!H4}K\>ax.Lrh9 !bC2`l r̿Z 2fŖzQ 32 IN1PsԒL`J&g_fy )W ל=|`ʟܮcZbm2tKgI /5>Ϥ}54x߽f>by8żì{ Ы+V{V5:^N4cy1u znmB=ɥֆð]@lX BV5l\[ 2-եg;&+wg tm#ak-vaf:K XE9(رX] QNzɅwf{ݛ.sb̶E*Qa8^蠶Ὄr3lS%*e:&vܢǑ;nc.TZqEG_E_m-|hmvq5&kj vbx&B1 ND񲃔ؔ 2:jȹzBrl懒~1t2eHD( H@E]T`tdЪT|g9T o\ܚ'Dl-6pz_.LL'Ʃ;INA(։Ն'gS2-\Tى`uYrOKGT6xcL;OBzHl#9yf *jQrWg?Ij2s'_(4qg{^Oh"̰%gA5a'gV`2P#؜%c9[g;bg{h2=Xֹ(x"=YGB=m299UqJWŞB)VD>JsYx*rY`Z3HeCFBn5qcOdBp6kNI7#mzz}CAL˅!fcq9ڌ_RXx)LIV\J!LAg>KgI#,YH=2 h͕3%+J`ё))0&ں.{!d[4,SEfә.x@zŲK42ά&cv~hv|>]>].޼6%|}C(zZ.YT]0ubzz :^kKu'XHN( DqnϹ`KG|2LtG4*nDHpe`x+p,K.U褕F8XN|Ã΃ t,R*HJ8 .LYZlǝqt=E@.'aټ(}ZyԎ`:_P ::;1Xőp.uXER`UD*G G0e4&jeKc59KʽUayRݭF4PT\U304z#؅ƐJ(',v!bus8t-C~1מuޯ Bk~uh+%$t`@0h+t:tAΕe.q9W6*Rix`,(3xn23Ńtp+q =wn=% |E."aQHFM:bR!{#8n`GY;k\eK\) SlTyr[YtűCG;:Iz0td3pT̍oh:U (`bI%)g2`6SMG=p"\h3kn2y+Drh{+i\ ]Iho]|H UBoFd=A/aj! 
/93X[ej臊jCG5kyTJq ada^| Xj-`':IO+?^!iVG_hiC"AG>Lkr7}U\`iH3Q]T\qtZ?H.)pԱa@wOPD$5riL \2(: eZ30{-#chntFL雟Dj{:6OU7FvýdjRY+7hpGjwQrzË́evG*ܼhZi{Z&V&FLB~V3s1aEgytߛ#歚;u6jyƫbh Z^zr=;])6?f_p`>܉Qo77g6W_?t V<=D;".6؜+ѣ*KM\ITݤqJIJ\UJ;*e]ҽ_fp Ai-q%r빣&`&D;I"s:;hm 7h֭Sp0Ezark4xRx.Zp$IX3Qx@+e$B9 3`n=g΃7.a肶Wj^!X|ƨF2C`Im!D \qIAp!"b`qBfͣpY:R44:⾻̰4N+X4$Opw%m8\A(r{ǞM?uYKmh$h"RXC8,.hQְT w(|꙳F2Z+' 8坒I&S੏F;a}hb&Rr&I-Y?dC;?s)B`}j!|h.|Uf/E*0Ic}N0^$w"aBoG$EcLt$%V \:"a]IA)L&MB_w+VE-6`M@ZX.ErB\ s3e0&1eh|KsLH*-f_t~-T}ST4cմ?9s.gofLJ58~k'XBr ac2|{׽1Z+A{ ׺r*Bfs$u|:W+|a1{ˏF9ҨP6_g7o|wwo.o?]yuyh`\L[4`bNͽnZJ?f|}3o{bx=JL06f=]ӲTHf3? 5M2W5Q}R],{v ͺ5Z6G]A~uC2)k_b? P~j(6#r!dk&??ʇ1f9X4 >b/6ƾXAF=iRKM^&fp#H>rqD!0q(F( lhD{hza} VpeK>xvd by8vbM(4ZBS*zb$ 4=yR"%\Dұ  bmB 5 ӛ=ͷvݺkb5XIiDykw.Hb:Rيz+שXŀ9&+O{j*ɜĨ`WWaR &uD9hkcLLnu>)UC3/8S_>X~N5f8۽I~GDžЮ^l;8H #СI2/Wy/nj''בU sϙ1\iE0Fk:<3(=: yC[gI?i<H=~5ڿVCa5p3|~ߛLv3ļR ̨#yRq s,ǔ$GɨD3 ۽L Ԑ? 0In yF^Fґ0хTm3|%nm\n:뵫:!$,JBV~/L }nT +*J#88f!L©A9 ('8t#AP1-~V(9cǾq;-Q97j 3 [q4aS',$svcz0BB2gЫgN D$KƭӠ8ާڂ_)ٚI~t~cbHp pA| B6DP@)XEAD-YǾ~.Ų ' ~bNu!8H%c%Ob~*IZ"f}Thz/Tx@%}4G%E̟e1I6Jh 'MZ]i2 ?ه.h.}9c]ޛej,{Wοk=  y,Ymc]Cle,pf~gk+z3]Bi3=x~Z"3?=.ܲ]^$ϗ.D۳Dʔ/eZ9'Œ K2~P,IYŞ!ͶC1X6B_wkqpk'*rmFx ,f1grHx+ʗJ;QɰDe,:u~?%pդ\q2T]j+JсzkB?+%O[㠹@rFϵ,wJÃ&𼦏7X]H8L0˧Q?.&}n}y5qryVk|)}%цoGOk2 0k'c͓q5"oPҴ7psoG۹HEpQBQ]Bu6nLA mj7>^ wB #@V{%wp] |)52pW8|Ffu_nuͻ3b+gj:~h]wO湢&M+J`DRr+WD5Dh\Ҫp>L?66w{N RAhǪPNS NJu'\~cJ[F]=#F^X- ù ;,?;q9dAB ->wc55!ap|Ӻ&-cc_ݭ`Mbي֤bE;$aq $T. 
9K<+[lt%Zpr%; 6Ҟߴ 0mImٺ,[Wqpoppn3n o%t0 0;I\"N`@vܑ HRr xBrW dUSqWIZ]%)At+.#@`OLSqWIZ]%)u箞/2\.b@vN2*A{],]P,.'Ϋ_?E>lF#WUgf?յڟhSK~~Yv>/,K>K^=7{NRmrO I\N&iG޵6rBK쇑~Yz @cїjYO̐"%ICFLLrWDHha?!u'L<}Lq_H}Om!hhklBj VNU^1b<x'7uxH4.g h8me@Žûy]rB+b3KۛѧagTwiA[xGi!tS$Ikk҄J,'J&\Q-Sx΢@hD#~Sh)[9R)߿ꕿ(S-L&?aB \Pb" δ^)yޅ:N_8y뺶;f\%%2N%q7VeZ O/P7@ :-+xk"&DHFE+=7}7/UnUfyoKXp |zQjϗP"Izo$zpXW]w!sj.ê ޘN*́8j!]s+XF\[-#۲*mGBdD/$n nc [1:غ#u샠iDկ-`7Snt= P?]E^c[ޅR]_ʇBK`>$vUfC7jayI@.-L%ʬbN0p5@s[-kw;ik]}}Pn0 t֮K?NL[lZW6}*EO-z:$j 1 Ds Z2ڨ5gE3%C)T{*YrkCn ] -  9XnuohadahkjyYI'#%$sdv eIh\Y:xdu u2qvjcK& ~30limCeoQN˝4*wWww[ƑURU <ֿ 4_31P/[k(֛{fɆ>Ig 0ElY7<0I92j T:6q;{AM̤"/u@.TZX3m}튾ꙛq~ug&ݎᄷ%I  U8J^A-;ʜQ(UZ UNw؟Y2YEdbWʆLP `[-w99/dԨt'B TeY?Usdr L yތW]:(t:V1^ո<;pa]V-æU֨ k%mxᎥO>9阃C\DU#*2>s=: (r d!0{UP1. e -r/s`(KC*AA:e@qQd$UI`G&A{y;lu0GƮWlY4CK]Xe׫}y-LluL2zR4s(Ir$&e("A&}JGH9|,ӚXJh9݉D#d3 ՚1w(T)ײFf ^NInrˀ-{OVy)&*8i;( 5vEgYɲlt[HzQ%ײ8H A/px44UZo 4[>mȓ ]%6GtJh%q>%!ؒ"`D$G}ha&BRhHd#F\3uQ9{ai` 4dJhh@-r605Qh_ׇ LQ,)A7l9%IQ/\Rb;ax,J [)yD*ψҚh )6q L6!ԙ!3X# @jC4xM.Q$Ix ]'eZ&vʍtZT,W%cHd $"Ā&#ӜHRa' '2AyB $IddB1dEk#؄;p3sh}R歡'SR%˟LLTTv,b_(.vEy@ 4^ 6J)p$ ) `\ᕐHМ.,nR {0;4mjMO~W pDРA"Ȼ\hQ?蔤k8X#E; ,!k#'B,Nk5β4nMk?:~Jŷlձqq쌹#s1=:eb2 Q]Fz4AH[\$ rb /-ӡQpnV)sw@2D'R(?B8K{E锨G#,Jrk)vE,BqϠU3k^!vFCKy& dS)UO"b^POn !F9–^ʫŧo} M SZ…_`e^; "ނ9rR1ҥ@8 h0Q ]~V{NΑbʞY憹r=؜0c&7x3:ZVEaPA(᳜ZcxL*(NaKe%J>i}"5z{&iJ!bf_]rh}ξo~?fnׅ e VM҉`Up})#γ$*/lgcDzjc[5ѬG &Ec=J9#9j8 &yNjk5h준0U`ĥs"IRh9Bÿ́ļ? IJm jSBHv7HkNzI% Ջ`}}8Ҵ}pr^|XbTL?cg6ERXR!hPII19G2uC]~#9EK(kg6(CR`MrVRiaQCK82*[9)[rt ![W^ࣃG$+5.dzBSK+y TpBęVjeB3399m}ř;Qt -@.CELZŨrb@!gGbVVJ!" 
$:vH䘍!Π>RlJ3.> 'dDr)+xjgF6K|Bxg!1WL'J=6T´m@95@esZg ëG«RXE- ,WϖRJe ºd I!];([L@w‡q<:w@INjHR[mS%$~hL  $΢`1!n͇~t]ҶjkY6aL$x 9H-bC;%-cZ}hh"ǡ)GOxPf-_) Lw"+4:'3 e-} Xe+EChJEsla݆|~=%pkIymKPQX"f(zz O֛.d[J`b0ڢ!)Slo)򇥐R'H .U[{K }^c 7s<.L)aUM+W O1{D.`}.SBk@M.>0‡( d|:`&T?\fwJSB%yGWzy?zw4s0J('lL.p&g)*3آ7m\RHz!ԁ}r%'g.4Oc>;\s!|.=ukoJER3V~}/ S .ZᐳW=BW dG7V\oT9B[zuj݌gsyǛe}"8[A r˵WKjnGכiͥ'wD[5q$!Wt4 k7?"CA0: g0b^GDOn_NQY?!FmzV5Iр%_F!w}3Az#~NhoTmyc*OBC yUgauKd?˻o?oUO8S9K# ro~۞+oOi\S_o8?'Un7qhS [}Z@(p؋ WU.1-&o~,=fCs#ڈ6fmƵm>rø;vpKbCon\C.lYk#-Q{Q2*F)(2XvlqFX6 ߴ1]rgG2cז}謿v_FWW9"AI )ya#`EJ 4W G;A+vwG4aQSIZoQ'偖1`oux+eP ,F컋!Nq?L+|uK^ m)8Ϊº%BJHo= ->SH#}J]n`gKq0> Rm g}uu3կjw" u*IeM2Qgȑ"> 2 ],h%$g9~Ŗ[(N7EVWWY6@JƣچhEw\޹sC2a'e2jx^RJQXrgLQcvpVu\# D兏^"QmbD @R3"uɭp?M2;:gCJKG6_;|7rV:S0$ n  Y[/R":n1pj䓴< &2k[L'Sڰ9Fs§1v%J㖲a)&㸒,qrOUT҅]7x8KpYx}i'yLNlJ>Uﭩ9&g٩V^E'AXDGLYM2 HH23ZJ%FQ3ZsRTOr`J([iu*֌^3vU:ӅBXb /$6=xIz6$M8>~|4# '霕MYd47 )ҘHlŁv"v!*!MǓF'6uBeWضl+@%QI'gƎabbltSk_eBkߗ/:IبN)>o \ ,]ddMT L[%=c;ՇXRF[ dM495,gO##:"\Ljr!t;g>lE¥wE#v6>eQ׈+*/٘"z\IT&zQA'ur2d AK!ɀ-RL1y KV:M>%Y.b:v3q6wH#I3\lW/bWF$Je+QHQ,F"BĚ)>`S)DS,zq+zqw,F}X-Z+eɔN5r#I'famAp}SE?З+kgjSqWJqo>Waۯ_x~}5E޾g/D'Nn]G<?R 0q*o!c^¯=YM2*yݹ.\)ڤ)j PUZ?S&\ͬ;5^YC~Wf[!lb;bjWuﭚ/ |nlP󕒫x4ZQjRgdovmz+_Sqlvݿ+.E7 Z.\/^@cžZn+i\RX[C:d|>]to\kߦf;/ddml̹{۽{}lwDR<۠OT*iXK@HV9 $u{Z:[ 'PL*Z)]FA"Z@Q&C(;;gC}CjNǣ/d? ܯOGB~aahSnpV Ld{GhrId*$o"5^X/d XTҔc@K(E 2PMDhR`=R"X|RPF'OD؝~߉?\52{v~*nbfJ.+z,Ki$L+|zG5&e"P&HдE=;K] wŽo. 3`d4PϲƖ#'e(/b azʭ[ ) fnM$S& F@%y/LU$ Hv%\tI^ O{Y4Byَ&BG,Z9#E)UqΕ6٥b_LPM " NJIJp)+Ε+*|ݠ~iz@7x#l"1I_B z^9Ăgݕ *^-҅B)ծ&a~髑(xJS!)'a&+uɢ0 [U,u>%eJITc;44KQ#'k/Q&;EJA;܁I/"<s0-]x]0P 74|hސRq&ɏtũy/VBhٓuL"%e|EȩM*@RF"dXL0 2kd+DO,zduS!} !6mÄALM2"%4`7ن\F׶I.=:z"tvk舗hξ[/ o"tL! 6Dl22FEGGIT'摞C@AOIqza#fsiE @N&w \ŖI׾coCnsƏ-oߕtn6X{DPPԻR^y<£b@):ᛚZJFhrz? q) <S69&$-Apl(slꆥ>`~09ٔ38n/ˬDS(1;q`&PD.2Ԣ*rO`:q\ 6K?7  KCtoNrc&]Ƙh36X)zkj^m,cq!Ǭ䝣ow=Xj5&D ˗} ^sh߁o{BoN4 aVMv&@B(^_"61DՈŸ&g|Uec[5hVM &KdCQhkZ$):`! 
NI hTZ4ʢ"t0IDQ2hUβHk`ԙ8M-KO>o&<2O7g[.0>|viLhCE>O}.~N^/Fs'5wu K *U65uKzwn0YO~*YG#&U٤bePr\t!fqML߾tBY&3mhJg$QR33my/3%іpdg&TYx 8LIRQa%.:Y2ڹ%ffɐ͛YuȘI'EFM306@hlu:wjWlphw]W]yJ^™knYҤ cBqP z1Ζ@.@LWV_xFJNvd?dYdukF%R! ւ2RֹLZQ@IۉQ5e]+wώi0^(g% #8\-ʩl-9`{뫢BcbM2scHj:&!>߳W].;˭x?O[wxZ c'Ց{ҳ1P}BI!P{bYN{a2x|7<٣clz+WJ(Cb 6k  s?'ۯ*Em2A vc':V{{W ޼̛_j_$׬{iR 6x8%P9>=Ӽ`V[wݯ;ߝM]gwò\ړhXg}o8x :L:siZ]-y)N57?'bQ܃J>/:zrtÁ^ڪ[]mR <ȋ<NfGAW[rCO+:Ms{ m vaI#/i7g/M'U!12uj ۝H^;F&M>&fp2Ҫ  0v`;|؟ ?{H\0߂lQ| 0$E%bAHd#Oղ%%bSzA۝*F2kI7NZcO/u _.M]u͗`k336<7AXg;YϲȞx<0r80r80rbM+K[YRp_JG<%i|`2 #7)2,w)a7F\fls&<̝d#]5qv4>oS0>/kzB2.)[ϭ7yr@f3uk3N'idV0yyLh!suPa6֮[K 5]L DKӤ 2NJN믧ba6O/v?h6z9%h}6gɥ99k^R]v׶TΜ^SK~**o?[ c9ZeE*VDŽ6R吒R6|֑#3YbLsFmMF!RhU[YrAГbFchIsDDَJ5,63BW ];f7F3Vc4:]]A"F`9H: P+!(.F Hâ"n`\ OVfP= lJmɤ1RN2Α[K#\R1O͎=Q{9_ 7![گ F嬧߈(a!& J=nX[`Ґ&R/dkHΨE6j"00+ddَRWTyeD="1VXRgirGQlU8s p C "rWWw3 nH^]рgh!餹H4%E}CD:.Nnq,Ee\=.Y7JL@ jVDĈ~h=c6B*&b Aez\<.\iv싇2vtMfY;!;c33whwpQr؁\|Tpsyv ވcI6ƪLxC }WxU Ue'( Ҙ!d^c9AH3立zCB鮉w4mV+M|,*{3 < s@tIҠ B3\s ޡQ唼"/6(h«xk/#Dz2 ^E‡'e|?,ÚTDTTђ?5? K5sހAlfƸ1k='wa@xYg3W8;Ldk)ҩϞ1 khS׳E͆W;ډ'ZMmR|=ۤ ڊJ!s޻uR(}2O tg|:{D8"Mz`x`\ˡz&1Y؄Gm<}֖Q5h Vn K̐cUL%)V^к#W0J}8ہa08wޭ?-vJL>^V{;v5]3mpI\*&VzQrGW*KU^qb֥d.״/ST쁧'Kg-ҐGDLJJG ԟils4`B ^RG*S躏2/p{f8U&8t!2$L<;D6Zqih帡ߒ_V}\c/*Xfw5C!Cۄ!RT[ \xM.d%T}]\_83ΜdZeMRX; RrȄ7)bwV&J$DTU`Lf~:8\<< 7Das3z w̔&E6eG-Q+5x p=㬠kx`EКSp~wQ=ԫΧIV1lϝdp!j_/'Jպl˫z#v#,lO:eĽgIGέ6X"̸rΪd{X'NO\sr|YɸWωHRkQ>'i/1EPh YҊsj+30ff7'S8Huyc?PJ=}<3sY'W<uG 7n]HF޴??V7&JO&ӣ-}O>I`1-Ï"y74q%;헿C/%8r#I? 
7@bϣA%S9%RPVʹ4$\jY0='-8`R8%q9EZOh@ V9-<#"rgWE\{6j,I :\)yƾF O%Sr*%Ukt7*=x?X揭q>«LW=dOd^Np03`AJY\,}{q;=] ہv~aΊ 27ox֦w>+s]>*4%hr /~`-Iy?0rn=s8Y{u P_]z\ x˧{+]|xr9*doIZ:5Jd76%s7U'Zʹ$|Y7 eG]L#`\a}C.;'baڨn\{BUl"6fveš8uG R\hY0,{Y[*;rΐAx\4:r:εs$ GrH!JxqX* (p"m6f$RM=D6Kw9든IHQ`O}(Ԣ>8k_xHG[X'DnG؜9:pnzg2R?,Y KiR X C "21jy!2p gUX !t/PVXن Ӧdoˡ6HiZϭlߓb>Bֵq_\Mgs}3S~kh SOA37ADneѥO=< ϔy^g%сx^癑yhD-3'D+Jhё+F'%n^8D٬򖠃*mr: B"&vJZ<8T%oq*-gF.XJ<ΝL*\qSTn枵.HLWm2 l6]|A399C#lM9Zs0Sf)+ FDR1iOH) kjIj!%d r*HF IY@DރXIq!e)ZGȷ"g;ż.&cr ~kɤ*%cuq.~xUu{Cˮw*[K KH/YLla?[K"lRk9# ؄xiHH YioM! Sv,9@֍e9;惹 AIR+YXfIQHd"PRhBv[YP*g06I >{H MJCk+,D|BKd%t/ݕK[)tXT>Zzja7:$', YLAU%C)@ *dOh@D,2v@2vFX#~K#toڪ ΠL@FO?8Ic1G祅7ECZ.D;t|F{CN 9j$ e| f,HFR^J#4LF`I5ms)F*@0K& jmiɟ0-2jF6&1+ЌNno6.^+C> rdDzK>1ηgCHtR)hd8gBG!8My/c%qDdsRF\()Rcrj~X2J݀̏7^|džz 9 ІDmag29;6f; AZrs`^LJ{1ո+Az9Px5cxll`zsJ\ >0}1`lh<#oD%%%w"ּ9`?T4Ĥ\,Q;+kE:l"5Ezɋ@3ƔК,spE{₊lZn'>xIB3rvQeRxׁXbw4J5b3K K0<) ]WE ޏ=`,% 6~% $XHTFyM&'c#VlT::Oz؊.@uk8Ѿ p}䎿ǭqw9=?r:S%Uz}S1OAVvg[zlb5>)jRh(5y6`v:Ma} P=^sKMsMjwxUM J\((,ZAو!%1^*5pCB믓zM7;z1^ WI` :Hњ 3'KVLZg8AݞJNWPQH^訋`}'A'LACqQጎhJ<}?D)+9T9ڐJ@^d" l@ AыuMj`$@Z">w ImI$*P pRAP) G]mo\~>h.ϿqA>t1䅫E:;[sՕX="Gt>}hQ3bM2cYRQiDw3{\VOzSa=yC>G[l THB|s oUJ(|s:I/oO$tgnU`H""H5Ճq/z\B_2wE4= a\g1Sє>4dJ߯u=w]U^~)}M Ԋ\Ԃ˩.x8/gﯠT9>1eѢyu3?{wO~pq~upv1Lמik^s1M83;ŝ{ݻ X;6tn~~BיWLYx*֝|6|Y-j|Z;ͣ&nus5ʩ%׃ ZH0uv92'wXԈ Z,:OG8>=fvo?w?~?^Hݟ߽x9a뻖d;,>3K/R~4e<_Zݐqĕ>]T@zPyW3,#\vvY΂_v#tVC+w떳^.㆖S^3_oOrBNݞ|_y-"j˹+΅3fzQ"nP"o5h f E>!FGmkc$f.P47$Ph|MKjwG6)-XTKE`eԒW$|c,#fhf~S:GfZ]dDo-̙OW6JN?Ueoo6})9U䧣_1O˂JI2EK;19ҊTly@A#E#T^^ A D)TsFPrJ¹"=e*=hoŻ\nEYY;*Eu\ "XJN% Jhʨd5l]1LSN_]:Y_9#y<6wwoSTClL?vܬo\P$FU {Jh;1R$~hzWXޗk6 ♝AU=4fCƢH$ %_h0ՃL:U(.a2IƨA4<GT?U~PAwg/^30gfٖO9焐e 0, }Eo& w0B MhFfw-CCۈz(jmٍvW)v'^jī/,|}jG+:H լAD9FwlVRZ!%XRY)uF+̇Yy?nlo[/"b^ܔ/1wGaǎ{&aA^H%֍3y+8=\`p$-s2."Y27 3nwe|&|LCx0H%crV[i+zSfHxRhjR"Yo]tFPD ':|lC>lkp@ aGFURZd+HK)XTQ6ٱ"p9TKB=Qzu!Ռ7!u|)|N q1T \UXdAiϵz)yce΁e1V[5tRyF`jHV{MUq/kGJizk c9&PhDQy":e TȢ9-UFl`e1M>4= #]Ƭ~UJt1ɽBO4̪ϣO4s~bFoF͋9ۻWx7~>M׀J} lF=!cͺɼo 
24nޕq$ٿ]J#"~3Ǝe'Ebݤ%y0}#o};K)*UVYSq659)6zPK3h6J;IuIe|VdJҙ8*-tiZc(hU)M_HGxtӧnGI >ǟU\cؿ?NhHP0 ݓ_| QvX3Uيˑx3&t$tK4x"s:n( 1tF<7F^8Zn.{?~ߎq/g~j2O`&g{pD~7z0)D{=j8جYc+ݠuɋ:_?zgtW#@Bx?S=ףF4PG^>}Իc[TRoCEaGs~|Ziß/C9Jz:^m=g]`U+UA xtUPҕ`VsZ<"SR^!]Q𩑟]|5L ]:]:zt>+]բfœ*(u^#]уGgDW0*p٨v1vv5ҕq8#*+`A+N^]ЩWIWbnzn T.Ma}7]69}`cx0QHyCgl8oqq VԂ[!f-j3S=7˭6n׶[2*ŀ/Oع2[;s8`[m7z6ozÏwwކ,CJw\fK";Ybl jj+6FJpV5ܔ . f̜.L.pў.h.qꋠA3ÉNIhiӇU>v} D h*Hü9 A5&ghbvFl&p:?4S-0M)Mұ=99w,,U K{i R#t 2H~Vs<Xaթ**Y`odRMyzh?(}>%θh$$ +I̖q_]_Gj3g q*8PtrA44ZhSNgGgJv:s+|{ՙ[!ړ K}Ef8ĬUJՁ'4$+V#@Z%ZêLd3 QcdU^HX Ge, άg)e۩Ow7w?ՊO.ch5hz{;ZCNlT]03:1^=^IWi|x3Z6$#oE߀8|BѰg7fݧc'>& M&J"ly] 蔆h92 *əv<'>=`, ;6ҷe)uQ9m,S2!3PD҅) ^YwV^wayx׿_(asnHArNc;U,n`D}+Z˩7Y*%y yFL ͂?$dtbG"+ bDU})$ZsE HolO }Ѳ}DW)ePd,z%/8g3"*cd`@њhY)$6,4 ]P':&kCm|&;B]VyLxd:;2cykPnWa4%U7lФM !@lʑ ڷ>˜x!BΚN) ;yqto*+&Ӂz@YfPoy|A!?A5fi6>x\t/H$+\ $8LgF M$l޲jSCYĽsIh,c,} ୍(cR*R5_yV >^m,i:a;Nxg8t St0lh=S>ҦcV#?u$SG4L 7i"ZjF MU*$̉ĐM$ 0x NSI$TP,[3I1m|}p^csRugnkbk{K?t` ֤ 1DZ)gyF|:ge4`W(5ZT-K[7Q7I>$UXLk?~?Hs Ni\-ԻQ >H{jW4&Ē.ŷ⧩[>YGۏϷNZZ3MfO4#ّ͋e|\,j[>&-|f?ʱ([/oK֕%0f|U}9IͨLewۼ`ޑdE;vΗIk_V)iPZ g=A_!Saj-#ʌ;Fq[aXhU9a9DBd0fh..B* )'ۺ151b 5j>H*)e2!P$MBT; q-8Q{a3vSf%̋ŎWF AHŇ1Kg- @|@b"1^xt"Jֱ-|(=Pt _S;F 9*xʡAH<(2zs)YD!R#Ks2Xˇ5vk.=7t=>ڰjS6w՗.Who rx˳QZ_Z7֫/Oy;\siA͵_ݛeC-'{_vU>7op幖aﯸ!{7Z㋯ptS?@^xj۟44a9dX@W(!{ubs~J_TpR6tq XXMC_$$u U÷8v,ًs'3szR!- X>F YL;x4Mtsp> z+.4OaFR%җ9W҉xl\tñD5Do%|~ɛ߽}ɿ7N0Q'}}7 f`\JYߚ_3bb} Sg[L-/feՙlq/#x mi!|g_AуA͝Ja\0s7\ c ~XK" (=-9k5awS_G|vΓ;EuvEN8AEUԩJ;T*$m?||5Z Hï^_l.Qp5gr(Nװ,vs 6W;l.Pfgqq{5sd?) s:YЂ{9q>`i&h@V_pEʼN3#BXYL騟1тFQ4`pHnб5vOqbSڥ( p`k9tnŧAN<}w^n+K:r}z)+-Fict0"'q&H5= eQ!r%0t@!XmJou[D/Q9j Bҷ XH>ՆsQc }QaqR"!!9 QMʡHZ<0FN>EzW9hYh0Uk57Kw08C5(E Ntҥq (R/rO;3L|=!׆g[pkqx d|)0h8it/cG$GM)( SqGe\egf sC+| ! Wj*8||e5 'MKֈS$\!:M:Ҫ8J5`{ դ ެf&!V}&}./. 
߻Mz\GH9 `n+B +KQIB7%IjIF$/:!H-m/:!u ")ۥsFyxeU_]w) =2 ]d[\G:RVUGf0 )F56(&K1*W$shnyx{d'&0cM5Ypqp_,[twK(Y0AO"NY':f96f ;hx"-wMH;MKwJ-܋?'AP!sF9ɱ80gjeK齶Š c:Wv@/IuҚlld9>\'d-;U{\ꎡ"vaOFL¼*ƝU<[ULԟ-@g"8a=Yf _ *]ݫbu &dЃaW"fAߣP4 `D\OD]XcWbz[{X^Bf]W킶 Nlr׾.z'bTso"W%)!I*3G0*GMmz%"th7TịU6yh`T(h&lrD`p'E2- ^g}S!XyloWWCy] 6EYTޮ,˧,)tnYV'i2OJ300/ n<)|Jǒϯヲrjr|}ޚeH>?;l0Ci s9 ۱ێ[JYΠd E@8+|lp!Ho37*d;xUUA>Hh)ZU۶\}X³EjH ln1rQ+}\ag%t0u0{QE3|O/"Yˠ s?aҘ݌/'sVABO*0Q{Waro*qp 3+Jd*1GvP[mYn*q:zpؖz D \%vJrpgWİ إho-yc0qIYWR=%'eO.o_xUܫeky cG!?o~MӲ˩{:bCS5b|-{Ӊa?ZebW}iV>ɾ=L'. `9´-`X2*^{?wΌRT!LTy-\`:BmMD{>Y+^9gEǸO2Ē=>+aa7#yErw6%v%!_}w|c` 8?;{[݋2͑K&}:rN 37m9'n9\n}}2a"UbWqR$v;r~p)Krir 3ɓXÂ#wvE0E^܏ %nS`8j6o~-swfjdm;ۀW|/8´ƲدR9w`Kz0CS쨸a‘Uxʤe q~Z[71(Ϫp,aF>bĺzy&rvVhIn.z0O޼:]YEb}Qdop)GR`zٙ8\ޟ3Or =KȎal!cgm/@z"ӒSRXqGo2K^R`47)$LS Է5*CzМu@)qA2?9Hysq->cPtv$Ґ}%[ٍ/?lȆ0ʦ.ݧ^Pd(;̲߆I.p{7&՗ѧzT,wa>hrf0_7`A_[N6m R {jWlEy ?suӶSӸSi3si չ+}>Ai82🔆9e,h=84S B x/8"e^Ҷ=FڶbS*NviP$x\/)5}e)uD@Y#1:`crLH580 ~#2UrpLi;DxDH\q 0<0`,PVG#QGXHT8S4 ,9飒^ʮK; FUerSWci*WWIrD!m"ba'v(qa1<. ,42+F+'3AO-tհs^!yHImB\Z%"¨"A9h:$YC, s!O+ȓuIOut8O{= F-o!ZGCD)8jek,`V}k~ɿ[_6qWHxBmKmxZU$A/OeƷC`0I]m\Včb5n!{YϳO|7LNq0yht<)LD*eO8T2=\egf䇥3 c! _Y`epj3!۪ٔޤ%Q)+ه鸈ʮ gq!*e'ZW/Zy W 1+|"#I$I-IՈRG6wEQG5dn!ܢ[bSD=1eˊJU|?^M5SK^e+NPybRVP5K~f48 m/nsUDcN~T塔" Ff,^<8)iS,mT%ۢ"Q2|)Ǖx/,,Wo % BB4/+89*^%wp5w,FkęQl~E$L2,Är )'wmGCqWI!+S$ábˆ_Ґ5{GfwMOUSTNZA{hzcƿ㢸PgX[|\u}ƍhM !cvA)Ha|;a*AfXG~'f06+Ş,_ ټǪԥeDDBpM4p׏-غ]^-tY/Ͽ&Zi;+`Hӑ ޝ2,RU&W/Sz5g 5Uƅқkb|CeMZ Y$12f?YAU=;\2(3ު4祚"ENnMCujѺn}!h#pw W=W{su~&w;"펠LxdmTw\8"NYҧ}0. 
s(&5ĈAuBY'͏kC(%_߆ЖvExI?Ow<0B;g {#0Y-&1p0H﵍,*163% t0?qk-2ۚn,Ef֑ތ U}[ƅG?VMpPydN_f [ ~+^VǛ 2Včq#hCmO_RNqc5`ձӕބ0cz>@Uq2\(3,V{ᄏF1QccE"zp=J,= HA ᑊZYID;Ǵ/ p1^,/OCNS-w|V,Nok^r>\1qi!8%$ Z1A> QqJ`La&đ f+uLg$^qN g$K8!CHa µt8( ) 2ॕaΘ2rҩvW)t4D{ Mq`c*h%{O[$ˌ9A [ƙ Y\1.ӾDތk<;0o_yu U7jPug PuQk b:Lx*.hxuyVUxp=+S>Rhg y1Myf0'=Rk`>A:`"GLGL-gn½y͑,IGF(*lAv9Q4SLR8=T+6^iyA)ıK"!'paޖ8qܦٸ`=9N@8$I"R5-W󟛷o@W_fJ`)`(2mKpRJPmP"9RDBH\2=U[A:Rb&y$IVi,H,<3 z 65e[iM-w݇ d)4YP4id7(X89J D K\+`)6 Y=6!zoG#2\*밊LTK G`252ro4rgFPT\U3XDR=qTBHcH<;r1J:;"Nv9l훎UNhcܯ m`P|p O2;tz!2vil#( ,eOC}Td ժ<Ͽ2<nbk+Í &=:%Qp*fԙ弔 $ oM Ի.w/^+ȫF芔op3=E`e D~; Owa=IZ~e4-Ӥ$sT vQH& ~ ݒ83s^ \HS CZ]ceKZe'Ϻef(SA16:UO*(zeguUX7Pno\Nڷa^g@{QIK{{Pv50==6F3Bx2I^"ޜm+:겗:EǸή{qՂSc,ħ,:2Ω{S'}wV&5}O{F+u8ިX|_u7\8鰏mmã4A0ɵ r Bܲ3f4j|sWL#,8ZI}G\\i?|^n gn (Vp)K >s'򧡓>mzpa@ed׸ev~0`E>ncÍ䣽5{sgDž?>O,h3ϖC7[@*<Ҫ-f3}&T1DޠK9}j"g4`<2 ,%&gH-hqtagۃ"G꠴2K" DK=7a $D$a!z}=< vyƔn?2ͮ>NZ9zpp#IƤFvfRq"v0Kwҥ;9t'Dh$))t'QKL̵LRáSLclXOD%C0!B*RX1c2b=6MVHʞdؚ8[+"=܍G&S6fWZE|Iˋu&DXMfE?lҍIW\+Z܊S$ 9u{ Q_>Ot؛Rojuz7W^m9ޢ慒!e,i~0nwNwo膊+Yx5ggY?~]t3EY@Gx3PY:|sʨTaOJX&l+[5!1XiU"q%&^E&GoI_ծga w_RMpFsjÑDF8|(*ϱfȣ)ŕ2HsMk77΃7.0tA}ikYx Y&u4 q-5!+ :DD,c4N,y*}TxGdJ*9$803,*͸J&VHF2Z+' 8坒I&S੏F;a±~hb ]Cj:zwtpБڇ\kA g 'i{?eHr\ %UF 6$H{b`ָP2|l\Y_/\;E)BG|H76@REynRF1SҌ ?q[&z4͘,CsţM0W\&s(UhgJ);ooՅEp0 \ko0WeR=ۅ@iqF g;W=Ԓ¨%n馩 m~Eey pTQЃL>mouhco~ȦVSI]PHj0TmW쥈)>QqoKrR:񯲲s5|}}߽|KL?/߾y ߠ%FEm{X>} Wfnѵ^~z%fu1OR!-# ?(M{[^ۿ1MC{uT-Vͧ`vu2)_ ! 
0n@).}Zkxk}e48BBU̖T6Q4+5}e~ٴ0,NT`#)ξɯp 88E6FF4ҁHa4>+8 2G¬0O!9>4;.B5jRc#I\L# %^R˕ HI$ @"T+Pyގc#R`mA rVK$v##agy*1`y[1XގB]>#*1*Uf&$IQF)3ٻ6rWTy: @:Op>LMd˹M[-_dr˖v%)  urF)xLb1HW:z*u |{1>tæ?Ӌ#]Yho0]lGBH 1єh>RMZC/uѤdK]R=d08[TPQG%!:Ӓ]..vaߨ(51) THHEHA؊Zzl*A +]܋]<y ځ Xg(*dI6XG4M\cInz8d&kL~e90̕j>QUv'nz qGdGcD\VGc>ts^)Gs zCN69sXU졛&%\BsH_+݅*w w_Vbu}D;ɯy,ws*gt{ȽO*6ww30 ex4IK*#4_O/cig߯R8mP==ܨ>K/Ǣiy*Sf@VAH]^O#_weWT"sT[e=!Vj&Dc2\ZX,gõtJ< R[Q:>"ɮWǖUSLV=BLn' l@&r 6)q 4FHH,&ca&.ñ+jRDFsz \NR\NŬ\~h>$B J}gNg1Ư^] ZwsSs+'q9z$2+~}>ZؓLVEe,hc@^ҨMP`d+ wI#@ޕV=Ѱ\ᬩQӡc֘8i*쪎b3,b Q%;[& J8sa}GVK2[rW.?\(䓾3[ٱRp2cYVZ@, Ɗ+R1@-7>A*g4D8d<ۖ:ej.HNM$EdqcK6z`7y0q!22kuϦӏLqeVV&$#}%qٰY^,#z{ ^i{6L Eh|)E,X x]5TzV=\Q.&vסm<58A )Zb_eNgVyhٲd]rH3Ž 'P1` UMhRZJ[]hȔi#M  ]P`* )&M5TDx$g%*Y-+fCd8KʝU]KWCcRIJ-)'afU+fb4AJ.ꔒRV8h]o%ګަX8k)֡_@0hܽr jCYη_":uhXjǁqa .:@ ,j3%%xA?yd'dkQ\|eUȩ*[L֒1|U|BStNF!X"L,)tx0[)6'貸hڋO"š%& O|mtҰ_̫̂GZK'cuϙr}%GwԮش8!tCGV,dM-$7:(&r>r@IX ] )"FVD_hDIOAIn-x3"vPkHqh~ix-*-\f;lܠg78D ŁG,;G !ӵjj3+2y[Z,+u֕jWWڷ?**YLgmȓoRڵ5 N&>yȺG?6~:=vV#ŻwdPv3˙} /ouE.,[>.6 _ng?ȱ!S'-jzw0eblW*Qmo~z/PԻPj?T+t2:(vњ9 y0^8&nOYȅB޸ΰ1Ԥ] RŘdlQA` rr)gyY)5gAh`BUQ(PdM JgsLlOr A.tmbj ,-)v<{MY;??wjf>W^1+2PAllK.9Le&#Cَwɻ@o?Iz1x{گ6Bb7P썀=vo^ȿq7?_6ua*\t0=x5cĮk5:*v1Dө*ߣK*^S~.۫gjb(.QD䨬s9HAUIJfbgʊSlVw3 XiI&]`&3 6_GBЇ`_hEm_Q:_qR=,~C ;?zV߉e*t _ #8NyӁgq*B)^پi~;/ :Ce7.5uI9X6xP%VŢ /%RIs9Ymʙ}նfj lct>&~:8[b'u]x?6{b|6[>5׷jI_CcvY>wM؍ >Xf7tX3ݾk[ hF8ٟzCf'"O# Һ[nv6tzHc|nJNZv mݾۇno|lJˇWNջ6:/h{s׈{  n%(Wgb.\y:#3(XXT\P˄bc~e~$Ms D ;:APo jJ\ɷW?v 'hnkW_{~ TJMX\AUAeU5卣RF8 ǃw#]=ӝL"i"xI&)RW18P~s]Te"@seb3ֵAMպwx&;iPoZE gnsVA8ͻ(iD(dFK66b%|ktH;N!)C1@zTQcWJ`q+Ш;WIyG:ls '`]j'Ұigy7 } GϦeroq1x5y'iZ77B-"\Nξy$"d򷺈"Q ߵlݐJ{2r/޿gO<-gſf-N43ЂSh2'wjNlV>鏓٥'e'9Ud&:$J E}ዻp-n]s7We4\w^9N]uEٺ) Okd8bǕáH=58* LF8 Df:\GL>|t~CaQ~_%7oКK%sH3лaߩ6yצS\+Np ӣ(UsBżxV3Z]IsaU8eɵ( 'f9+"dя:0@%Mږ@\ҧuͰZe!5FQ+x4j`vS瀯m[]tr]*!AF\̑ܰ@xsէW*F|x~qrPD׼h~ _~w˿?zehoGY9fZăI\3b>VgQZO?wjxi#8btx1Q(ܢoPb ˕w#+E\f6K.i%O2f:w4nMk#z!=Ei~]ٲ8US*(s4q!1?Zw0r:$xbD Zf_kS[0Os.Ҿt qcIJpQ`fW7J44(f.D}u{7׎,ۨ!vӜoxPgcyS#BR0( bTGQ9S!Tgu뾊xbY-BVa^9] 
Qҙ5lli4Z)hǤ@M1p2k~dN[Na%u]zy_XsOOCYHt h1HXɍ3A*q`4P4RD"-",7bC  ѦQGXHT8F>*YKp3"ا`.j^pn; b΂wDrǐз޶;Ґ~&.s|Va i`C?xrsO# Qhq3 +)J6EHX] Ѹ4%pC`RGTRJmɌ6>ifaL|/tw@!vVyxGq'}\_:D)n>"c;. S/4\ J؞7tx,wl[Ϭt2-BA)~4l/]\<[m&ҧjd ?ԮU л-NrމO5ZygSK[gN' pԠrigD *AmzL& r2{z˔.,;yunl,$"r줴;E5`Ĝ_xyu J ]%b]j^T/_5R/'o4+r0~ v@i: #'~8j/`Q)odeQ,ݥ gՒ}y?p:ߛ] څ& `*$kfhv f-õ&PUaCvo돪o\,;lfW,|g25v9EMpuR4׃hNp4M.u/}T>U"L?)c %SKXmDɢ7ҁ)d\b?%`< Zr8$;ђ? KF)J$Z );彋*2a5ڦ IimhuZ}@[,ɾ}XàLjb wd+UTsc%bDT'Eq w7+W }=6?We1Q篠ev9s=P+%-MjRڪTϤR)8yv1}ḟK:`>ZNaTspC*u4peX*Q+ġURp%:"GW@BGWZm#G_n9," pl>,n/ŀxDZbUeGZV,V%KKz,>,vYWԕK){ jt%iϮuEФ+I7EWB:?\WBiu 8i0(ުѕѢ+ _jJU \θrN:'g;]Л^OН?'& W5-;qN`E<\y˝/}qϛj:tn9-ܟEm:owroOMC |lhem!Q?8}qI1OW^\w9z-d}}ݞFh?pGkh m·Dl\#vjUmn8!I0Ȃ*8(gl&g^F*}lƔɘyl6Yc*C$5b\k] -N(|nJEdMZtŴt] YWߌh˪caCj0CjJ>kOߤq;EFf]Z(;cJphѕu%κL+&ؤFW] מp] e0&+L Eb`pN[-C8GI4jBsLw_\Zt%Ε+8jBg("]1ptzI6@̺z]gQ8X=ש /~0(8' `@kS0RܸoDVq_ |ne/_me-vG̰lwK_J΢jo+j&IQ\\R3HhS7`?,#:5]r`3(ܘi-Y(|CJ銁ѕע+u%1̺ftxl<7O|𥾣pxv8Z8Г Qf8BWqծU8$5G ]1mS҆YWԕGQӵ+֫ѕ:5ٕz,]WBYڃTf]=R+v] m )=Ь *ISvJp;BKJ(<bE/i1 _¬N;(#ʝ ӨE ߨg\ +n4/|)A2QQo+^|E mBI)g%J]P+jB~$&ZM%8ɮׁѢ+gWBf]}3UՃ96)5H`tHpؽGhlcg]R@@T+FW]1W_p] %YWԕ3Mb`+jt%_{J(]쬫 v58] .E-KSYWtQ j+ugWBҬ *S+Nz.0Ɛ] u%ͺB ( H'>*Ąd7ES`0ٴY>WJ`Ӽ@; cmgdi-pqmnx!jnsj6Wq0Mj" VLIυojfLk >4Jyh6Yï |ݤFxO8ςPp^] n4Zt%ILIfշ+زXC6GǃgWpa06h082ٲ 0BW0jתkItY5\Jh1+ +k:P+^7]1WYWԕDL]1n4jB{I(K9Qt]10:=׮] -J(#κyHWzt%A͵+=nR#)< >yEb`DP+sgPhc׮2$uEfx庼ݤpa쉥ĘB2q ߈r? ^h P2"|%QQ蹡/&ZK2a2&eD/"] jt%^ } _rJ2&+[TtINA@Iη+ec#ϖ9F8\:ڹq1FW(Saw]YWV=x ?Lѕ h2ҚYWԕE x28'bd] uJ(quA4d`kQס] ]0j򈞬"]10Opբ+ KוP:?jA85ܔJ T J'P+N.]W\jP+$7j"em'E 'ޟ CG"n\2)`A܌*L/fsq%e7%Ҡ\K ߛfTFUԘJCJ.qϾ1 E3Q3e_@}z}v[+G+.cm?SpGdd*&edll+r ]&tM hGv'_E*뮤/ɊYN_-]WZ]_>/\ݝm#?aIJzQÿ\2>-IY`eG d_ {Q+,Çk~Y@pi, _ye, a9-rtNתG,{B SU N80,k(~}P9vt嶬zwl`:rQ:{ h op#tf]Z@6}pFWNBH&+K>0ax0] n -ڈJ(K{6ǬGѕQ+vz7jѕ"2ͺ<4eW L=nҢ+J(K{ U+ח5C]R^^KX+~mzuy+>:{çqۛ8(&^-o?sL\5p\~_}ne|6!o)ni}M9~cl9z=n;Y^A}G=6)RlYp5 ! 
3sQEG>Z*Rmo?k_s%]wzsGI+d}O5L6͗黹}"峳Usu־/D}јU|{ wϖWopWyw B蟽{whw8`|w>nYx}{ [|*Uv[wv󄫻uqSE~ˡx[SorӥO)^1WO~]Xt8k]p'ñeC;,3pXE psi|/T-qq :t9!Ͽ$y_2AjM骾Xfz-*:;g1v-蚌 B|pjr]B yU$|wuH#ڛL1|ky̮.M-vjctI]ET) E|*lM5LTNr!4"Svpf"jl89>ߑiBw;N6u\H铽[h΅i!=677"ȴUmrێ=gZJ ׶\ %%5PG |sG )Ӈu!&sƹ[r\[-e6ym%ZJOX] 1$ښysW + Mk1P5I@L7r{nbEoj>ޚ΄PN]dav [nΎYQEȄl5ȅv:]pxQrTq0k2qYo9S6F!y6>a=UܷN WvX΢R[dεF9\Gv3'ha9WmqbB#?,d,Oy<'MB]mا{d\|j=놼>USѷe/6 ̣r7${æ܆΢mP:"5MnΠ|R>w&X~_bO؎{Y#6θrgc_3)8CÄ;f nK',ˊ}#WM䮾5sg+2k8"P8[3 .r~3zצ3!NC@7sb[W Y-gMvLL-'\eڷ:I#) GHp>Dx-$t>% IАf%SdJW}HT*#h`!&S`~72-]8g4D.H2ӪAU &deZP>:%$xGe@Yi)nuG-="q nɂN5~D}%TU;+Q~fېMR V Dv*,}?XAoݷSj;ӏ]@,M$C.2moYe>bM=A!%XB>hAjzq< J.p_>УLP}ֽ׀R"Pe*2 ڐD4J r[O κ$!;V@@]Zx _3>H[&:7-;hﳱyp!(tg@ @N&e#kv0qAsh`:u 35)JL T G#qȈ"kU"P4,,{ sMβ@A26%Vm[ 7n+i-liVYIDeoZM/-z,~/XPh6P*t'`F~W!уL iF8ro,AsOVQA,iJ M\ѓF*ek*LC` l) _{3LH1`ˡ蘅`VRp2 ~ׅK:(jI-( _ y(O0]g튆 358 N}UQ~ԋB3۵y:],kÚIe[:䵡CΞ 47ա*lCrv_4;ۡ}S/;g/aCW;5k&e}3YRѴ8f#tqA+k|:uzbxwt]MŬ8};/Pbq9?;=ʟg3-+^-bN%-.Žiqn[+ap6P7b:u,F@wRZ)R{ϔ<3Gh..֫'`Xt*bT+J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%п %>"%h@HhՓ?m@8>G%(@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X JWꘔ@ yDJ h@@kœWJV=K%լb%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V=_%IǤ˜x@k('JY @J@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X |@/Gvpz^^0Tay}]\ZPwiΗGez5YwEp9wEhgW@`w ݕq1*DaW7cqW9Z.y?讬W`'Ѹ+++B<"stWΫ(n3)u 'J_}z) u<7Mp>7Mh?stӑzDb:N&b\y+vsԯˋezio-fZW\ >| xhFUQ~T4Ҡ2(VoeɹUSѢv[ӕT-!dNC:Y^tZܙ]S;:((G9a{={`k&ә S(:S_l x:+Qԕߡ>^/FKP˟v9z[6fءYNv{+}ḽVxuiqP'Ʈ=ܕewwzgrbjoOVGi6'ć84;S|;pJ_]4~Ifbr3>.;loTrq97'(1&W }>.Û*Mo jsä@_QKWbz:BB*& Rʽq7`"70ߎ_ioөVNawVvZ/Wzihx?5N *?I~ϯoᨣvlvw5Spl׾ڟ|5ޗl7~(~~1-O=uA]Ld PChE{ twVG wQgUf(-?n(펨׽lMNO78ǯw졇Ԛ " ckpRZRQӤERU>S)٩gCHl.=%cAⳁ͗ ~H-7Ϸ,^'i}ߧ35 Kk C'!΄l"'`3^* Kpχ4j>;:S2;o?z>)!EWf ,;m錴=}w}I8tHϓ%测Q(>l?~zfy:o/GfY.mM/oMo DX~J:{Ӑ<7Db'bɟoL$zdofk59wP)IRŋ׭$R:^-{mg b(@ Dl8Kfzq7]o.|uS)t{ԿUٻq'^SxOk/z}~7p}yhvuHX2Q7M`1/=umOww˻o.9RE *|skwv+k{75G%oo,J޹f{M7tq]vÿn UZT|cr5 $:j6i1z<#jMBLs]7 {zh[} * kt9::_ 
jo#:.HNE7}VF#hVlIxB)%,S]mP{ZJGC,&@nk\m"W.We_S]sǩ7fk:ٵٖT,/!ͮMHƥCHA/e{Hqեn/zZ{OWbBrUsIQAK%b)^lT`/ל}lP٘ sr˘C(wlrq5 oD1kv(iC4-~sƩ[Gw8 wXC0,۠ sֻkj[j֙,6Kt%Al3"?{WƑ / H}EVN'M jkdxoPIe`Sg8]SS.Pe `E x'pv Ovm Bow^8 O§dtl%4Oh>X*gsIΐ.Vw}D2|``E'X(%")o)f)z>I匢s(7#%,mV`Uo1! x ]t Hkjc?ޅ7l ~kPHs+K}xYpS")1"@k^h-pt+S7Sd^g[,wN\5# D wy`'봓kuntNYAԝIa=ڬxᨬ2IAZK.$JQ]ϚUe+ٺw2 3 qFƩ't.p>Hg;W.M$ÉKc~GM49JE:YtXz:hYnCypP)jcI&.Dc :dJ/,&] K+8Tc΁jh$26HɃ% ERyєZtƲcB3{e=v[ZRL,(PP<2HP 9'mԉ:Ǒ JO:WUn2%WLA-.}sꖡswTޱmvco>n]j}jC mzV*ŷvtko^:w QXbP^- ڨp,1cySyf3_/\*pIpl6x15{ y wg0jp]6vrݛCG/(s-ˆXMtP+Eз#3aAX$)ĹxŴcG,jPck4^Dy׌IkC"EGsQ/JzM1E0I]FR2$*A#֠A1`&($ctA;Ws;N}N="d`<D,""-C7cc^!9X%#!R MyE!%!R[֧@xyF-X8FE5z,ȯ҉" Q{rߕCuӒ]qEb}# L$'#IuA3Ń yd ^&.s.!/|7<ܜQLR4mJ(Q˯39w ѱw[Si]&ȃq*0J*cUtNFH]ïCd$籕 M&.58`2`{M1tF4D%8EY0&t7e85>K<L%ʬbHD{[gˤՂټgbW{ɈI9PA%aښjq#cƔs$1YL'ͻ#^׿q_\;끜.M-a#%zGBTd x_6j?ĀUXY-EaNenڧwaVw+o]VlTD2_PܟiO^'"aW p> bZn1G~ _iT~R6;5n.j(YnYY=\\U>&O z5% pTUڞz?bN薿/Gpz eZ#;ɷיD.~]=>>(_[Ev"`DJ"P72eI\D %4 4宋z15Y 3|y71St?k +O},1}xeX JDbļ.8&Z Q@އcua/uU:`x`To˛G։L^8޺'a1Ej 1 Dslt-g3%CC<b,[ C1qnC9^ 2pENu؁glLѪm`Mޮj xP. 3Nő8Xɒ,@xPyyM OOg!ϧ<,DCT UH/hidyӢ%2L ZhcwSy2]wat>FGoGTogЗ1FHDA# yHd*i%)``R1Y}1?kcN-qd6DJ{vD,ėJx6/8'69Mr)|>J Bt\yC/].E(ESwS9]ϠzQ0 uҊ:9fCcS6ye3i׿xqՏX~<~럻/.Ɯ__¼ q}im>=J*^OӟI}Ar!? {f)@ اӞ2>l)ѫ|//+KE?~l k Bx 9R׉b{OPX VWAMV*'abZN9ioq0Zs觟_g/?|ӽoh>~5`ڼ7\Cߵuȟޯ5r>TNK%пJ۳)~=/&A YEpmJU. r ץ|,הזk !=B oqJճ^RF˟~(tsaf,̿n09US7;v<%. ܶޕxSS)7Շb4 g?E7t iٞEh]ݫ)$j]zхWF[yR& V{,DڳnW[*]\ ωo{Lghvw֝a+,DHJ)5RrD'xӜM>]DždtIxH#FR9H"zHZ#tJePڵb!g(+jρo!8T.%B G7r8煌=,GR*250Aջ*}ͲWF(ƓBSHVfaj2t#@VJP3upMոDrI4ĉNѠY-aVkc{Z5,W/QI. 
gK:1Ib01dPފd E$hpGV]x@ gz8_!o]e ؇`A{meI!8Ee0䙡X3U}urUguu {,86%94Tv7gT-A+qن}n# !*@`J nI^GSo<Ƿͼe"`l|M+X>^^ti?o}fҒ Qǎvޣ2*>!ި77zXFHI6;r>;c׽ ZU7{%c~ZbbXLݨC-Be2[dY1l08='~vaKrWSH~XMv00Ƌ<{+{H"|ҁqU(R"bj( m|tCW ]5xoDĂ-6Lťl!ZVdxYa rmtJPwlVeU.{13Q*6Τ&L|!=QԱX$TH٠0'mYŞ57qq1CǗӺ&p{dEo=ƹi^101K U̒/&V$bq'Q%CU'SأξZ]5jkE<0=H7M:Y$$ek@[Pؙ 1!̺:TY%W)O+Up,*@ 5UA6* s5IlJ& ՛8cO݊pg| pm0ڔ}܇~~ʲb| >y lOuY\y&9̱KbSoW3}ɦoLˀ3w3:niDh+v3U4Yv\g(d]0b4Z | 2KA5b- !XC: u(i]G=Q=y]> [M?^-ߝYmr n/M U/t}@UH!*EK3UqW5\YYs6Tl;8ERƋ#J`.A:` &'wwlז TL,ZVPYBD-TȞ*T)L]@~U%oSYL"Jtf*@&BFޗ8e5Y|d3wڦ4Iβ)"^\m/N/ۯ>>W>^W?%Ze4Zf!9f9u5k54_1Cp6ʧl c)ƽ%HZ(K# i&Ůg]M'f} AIS ה,,RtRJV.T+kd@Ȉؔ/BMCuu۱\"LYcmZBI& yV9$m5JFe!&G踤Zf |ȃ7:&SKXj9`meU5ۅN`I眕ʵ;"ܚq ~ 6m`!Bqv;t`wJ ;ɱ*pgY;lM1񂯬Dh P AĔk9Fsml^t"j-/1ZlJpF~ʉBGBtNf7)G05|Zgvល_6l/m81X{^ Zvwu:::БyYnuW.:$&;z@IP[KLΤPp@IAI~WնڦV`?]X?t b.a2G9\$]l"v\DketKoהּn"rr}}*?]ʧ2:l^GޗTk>.I*GOC>ȺW?7]|tf4K_+:Kc,I{hݐ|__5b~Fn|<.{ށΠKy?RTpVt'~;W5Z ?x8&4$dV!QAu0>71Ě/QS BXIC۰ # z 6q|8`8vNZ?`_Ṁ/&Sb<;Lg]jzJzU&Z@ȉ(}'ƚQ&w/ m eؕv77-2=в`*r ?`lyOWȑ_Wygr?g ڹ\z=vfI=!p2L0&-±3&% L%25GƓ?!s%#9s݃u\5ijR 4W@jq6b\5) 4W>X gX,nQtv;#;{H[o?t~)_n?H(3oyVobnzzv>Q5[ItM#Nj^//,y$`GQ^̧E7ny";rE[ZfnFgWWr- Nէ)UT72'2N h>ѤQF2e@uP:ԹuCP:ԹuavA0u\:סus\:סus\:` V3 cCP86 c/8$.>_L^Hs.>`PU2TYudLξZ]5j) (6G>x+v;8рӀ@UN5j@!W ;ʀ1!̺:glg{Ab *0dҵĹ}܇~7_(rʲGپ<jw9ߝMsc7?(6Uvy%8.l:VTJJ 8s78Ӫƙ[ɷSD;™[?*P;36> ,P8X_Q3FC!DӡkȔ5U'c#%vX裋֞qfoū="$zIrNyu;>?Qb}׷S7)Պ(2 5Q(զњƘ!8 {SBбmb낼5gRvI$d-ϥIA4b׳&㺲)YxYHQd3809k\rW2+('ǁM)_~gʁniB!eZr"(0e5T$ЄUDCoYuQ:.)֫Z%W,˜EÀ[QUeU5ۅN9+k *:vD:6.U2 ~ 6m`CM`'ӴT֊MJCش,ͺ=t/ٓ 4tuRB"Ӑџ|;<9[2r:<<:_Tʾ5[&juRV5QJBH~l/|ƚ:؆Ŕ bJ@l)T"W+B NHJI*$D)Ĉ9}9* WH @Vm6$k"=겱"}Fx J2x}{O OH)kw̔Ƴ%kousijQ>DDB&`%X1o!p|Rl"ʤȨN!$7 'BQ< JI8;J@Aid,Fu~*Űg싅0  /)6_dxCbcybPp~8xZؠ "ZQ@j#e2Q)AxI9S$-& Qs*)rÓeƞ pg6ҠW"btBێȳª1ЦtR܍~< WLCAbܱ'j_-]-sܡj"Q+5cPqEet2D%=CMWSʝ5@I*IZȨ&\ xԈ5H'(#bd Fub܍Q캜P싈0";DuiQ5-9A`, QiA,r@mYJm(5hR<UTThВFTP:з9;G|qqVuHb\/.¸:\pqm  Ƃ"H"` C:Lq$ꈚIr.!/|?<|,<^{gq ;O6auAp]E?rͭzG+>L 9:G7nuז| VPCAb?EXp V%ft]N I8h%O8* f ׀:5aC% ڢy%QFwGw8,?t8"Z]wH? 
cԚ"1hQ 9;)r3PRġ5tڣv;GC砝5jU4; 5aSH" 8` .7J"i񽒿JIr7ul68;.Aƒaע.g}Xz ŐH&VR<;j;齪-y0鷡@?e!75~x:8~߿p.w;3~`Qrh^WqGo墊htܞ"Q J!* 'ru= m}LZ#"8goCP*tLW qh + hA(G3rGMy!F @ Q.: etVTs13j[\g  Uϸ C)gf'uO52[b5Q@VJ@v0\S.BOU"W3qSt0ȇ-ȡekb5-峗ITTpa8Ө䎛L&H& H ZTb5K^r[8ģUlsox/~7gWY*\(qwel6Z)mNNRxea[*ܶѺڲxVwHL/^Ttv;vKg>j %嫊f-N\Wb:^ uERFR}tKuK4U!.^z7gYlٯ~&A[E-"47Wxa9r/fEݺggŪgZ{8_!ewՏ2`\:uI/ӤBRqzdQϑHYc$<3dWTU:]5/gbvv}t :TQTk jфP"e:1{TǗ{'{AL$7a~C0̢2I!LHl~Kc]ti:tTz>jbBX zXZ;{up Ko׫OK_`寖w~ ǫV/juwpOo~~yh| ?{˃ktZ{9ϋǫV9h30mּE_N`q 4ɨt@avFH5EݰmIޫ`깖պq $P+ܷ^P@.Lw]t),˒C/eG$2ɁRDVsThtT>z9k[u>v@k6imQnHѤ`_9iuXlUk7ۘ^<髙zOTcG5hQM &%_!(ZHAz-k]II`$I#mTZ4ʢ"u0IDQ2heJ3qF>!$Bڥ+;X3OJuSfz#,Z9l=.l>@)4s$S&3mhJg$QR3gj8N=(μD3$K83rKٜ2$|P< +Pv$ @f_EF;|cY2d&xv2fi& mQi Lf ,%([]:ƙsD GrF'+w?j}&7he-bWdll)lByU2m"ph{D:h4TƁ٩6^d J[=&,Zj0ɨ0f6QdV:= d)[J,,C41H&:0*%)OYuبu>{ﯱ ʅnCm+FD"cQT+Wwy—X,] ٲ,]h5v*F34NlO6&7RT@ʉ-zdQ ѮBǢ YSQH8z#R'z [{ּ2mC[ Ё ɰr+pώ?I:P#Z7oEFqՓ䟁!$˳ QIIitX,21F",h!ƚ>g )FwڛqpߕԵBܖr>)O:FJ"8`MuK`JN` *7%dA*BEgDibEۜeO%ʵ+#$6XAǔhL\{]Mԟ(.X5ڐ'`|T "J %s =1%|cٞuDI\f,Ķx:Šup:[*pDc[ M=eĐׇ1NBlş_R ;-Rr+_L<-GQ}C|2ӨϬ9PRH-88rVkAC}: ~86{ijUI%a! 6@m4. `R\#$l͟x@phjZR_ף$dʃ0ǖz> NBH#z_l]^j_+ҍ&WY9ڔ~;☧nԢαupO˧+li.mo|s:?^^xs]" ˲ḌGNjո3 yqwDR[CgbQ>zisa8sխ:m}5͐^ZHmذm2MOgʠ2/L;cKK-Ŀ_6/nO7߾|῿7߽;7~óΟxj "{ϛ piCaZ{vhuߌbIMᕆFayA^}g4 + cXoϺgުiM{K5Ie킳vӇܺn@`[ *Ԗᗂ#y.˕1y4q8I=hoEXàf  cj^Y P310J|ۣ+ҴyIG2uP9%ʆ9D;Aүt3=wker 9AܘigJ|Qcm&ױgߺ;Pi؈SoJ*R^oJE_o֛VTTlr9EprJӀcqd㧲ўC9ܲAS %-.QE 2 *Ƅ\02 z{޴s?%k,ri"cdM0& ]ADWRQH$YcrM^&t!xFrc@텰ޕcv1]1C@0M3N7;@g,yT U%eKq(w"ƑrO;mT"&}Ht0'hIMWkg<6Ek$`pL=+-,%+db9bx4hUJZ[fx+eP :`΢e.u *xYܪ&H)Ӄ+!l2Ot@$`МaZVn_`Yk%ʦ#*)dFRTDci2qT}mW%Wŧ쬘]ʉwaR?@9E k_*` 2h!fH h6HFZ{{99˽K`5*W7TbKKY(N}d>5P&iŧ0.f.m`G&/ak=ټO>0$*nջowݝ%VyCo¢^yHFӼYr~]msRF{ͶgZOĴBdgZ\>i~7n<,#|aU!H ǥїXZBHT@=:P:!sC:WNTք W9G V$6,굳Dkjc40)󞺼5 AHRVa ;cƌL"^ˈip`BZMM!KEE؟^3x x<'Ԧ:?iu*j_5XiR"(yL>jmdd&Zf]S]QQxqz!,,I(uaaR"2/ll_tZZ> xy:+a'g/u@ݚs/[osat?>ږAݤZL՝~,׳S|X@)K +B9*JSm09@e+ %ZM@H1 . 
,䊑9ZE\nd&E(qlXmf싅:3 KҽqcFrQ~&F/ȈgyQ/G^ 1Y)EɨD3 :)Ҕ !){DS#IItY<#/#vct`ZD"rR.Km'cM jg{9P{=j?+HX*p Xlɖ} nqPm%VUE.qVq@CS6G9 ('8=+~#JudllZOa*M:v͏}gFD#bs[\]MG'$*nJe҄ILƳR yZadVopz ",iP΁& rQ#1sĺ܎,U..:͒}qdEbubHp p~ Ba$1_e`yŃbw4Lc_<eb}U_Y8ȝ*tÁ&Dn4i(jz'~C{?jvyv5LjTjh):{7H:ʦ^qsSnA( Ar EDO}* HF G!4c3!!nws܇cNꀡF/!t %6Jnx"ט"Y` +[8/ *5T25)Z`{\ aI֪:ڼTSؠN4$mNwtj|@IXR.Y0i,c-a:>O.o )V:,*/a&Oŕ (`NU9O.zv|(]ix™5|ZVM¾|D隸eAZBZj#\eè\kj@*x Xy{l~n| kiܜƾeрoz\Et\du> JȍUXu6$fsש׏ x\L4кezs0jK6[Ѭ;%\Xwvܳ;|0L69Q=2wr<ηtlte[w!'rj.ntC66i$XĊ|W[ڞ& 2y8L0+gӋQ`^RFH(򥖜N)bxЄ㾏.co6 7ҧoӷlM߲RitdZl=z"J$*;i774z>FC怦mP|-m6^|9x |W6='hDǀDGb1q&H5= 03J#A!v`u)H ꨷K ^ǣr2q*)!Jz)}V`E*L<)8k.J8,NJ7d1g;JI$30dg;0=FH/+`linuo5v@. %J'l_`;)GF)(fvb96EB Hx&ٙg-!q^!yaeK$WTf*:ɤuKȓy]7n#ѯz`{F.o!,{#y Q ;eKx ϶;Y3^7Ś5+A&ԽHQnnh ]L|)6\h]sp1DOm="` k8I ƠcJ棷^yr=o8ӊ'd\.AmmAm5NB^B^W;B먺+7LIPto,91x]Gu̧iA' 5~m2|wTk Tfw O)$8+{\4b.>ѸK.%1iJ1t~|Q^4@A_拉 i&!u(70}UU`eвtR&O&U"-J@0Gzr|\65޾A l,oEzW?66_iܫd:a[4yX>]ۀ|aAY)/4Z J¸Þ҇M'r@ӌKX:2KOq٢}X{ד~ŷ7=kr#@u Ʈ;c n+tsy׳:/a?TMMudr 1vShLl}{Sp/-Ô00ʳЪ]kl9LxdmT@)Q)1VB 0"چVP%f񷩧L|Wb{Z 6R[ Asl>lO=zZRqK8dcG?.O`uV=w&Zפ(t5%::-VP,Zly쵲]e &UA}0hxTQ͍QmQ'QoF#AWvQjε%6^y*+zuC'=Fz?L}z2 ҍzix}a(/̧_&0q5~$F.ä$J3XdQ,}Ka Yg۽Z%FMY፴=ZIµU+mzolXpA^ ׮J6Tcâśӯh:W~g/`W6|f6Exµ!uUS1UY=ZƦ BT Euȿ'Ɲ0V3Md'3fpR6'\{K&(m4ֹ@ q $}dlHu}/Y[rISk s{;(9#BT̂rdh t'n1EdPO8ajNxz=W[ydY 5@!kMi%0d}E㹵_Y`ɳ͉=ɑNğ;|֍j]ۉֱF46"XnR#(ɱ؈ wFRjz3q̙GW 0h*KuW+UX$W -p ܳ+ɸ@GM~ Gx_tirY͌Oko6wJǿ SŴ͜[xMC^bR­2r@K°RTr|[ԾBl8oӽ祭Ùq*' ;9n ȥwXdE\*6m WVq5Z-5;nylޮYA7됟9[/=W{Tu}IaA_wc#,gySM1xͷ;|wwc[ _yd[_ߜPoMZ͟M٠]|Go*(QD\}yjzԘY11wΘ .jΤqo?WF}#{_w W5%.PT]N{Զv]mGWQ$]?ɠR7kWFi2JuI]w-|k8-|3ʝcF_xW~T]JWߏ?g/:}/%<7I t%! 
jC7ƭ?ptMa\7Z((el1H t̀po9]W /9oQW9ؑI DW+ҋ6>2v]}5Jg6}v!_^W,]Յ/ϢK=`%4CWiCBDʀ<\ͽh{]WO+|HWAѕnte)]WFeu$:ҕ܍ WR/2kוQuA]IT#],]JWY2ڼ+P6`O) v?`?kWhů]WF **E)~AGWFm(e6unFq|xO<$ OnFQ(I[yR (x ('5z'5+hg'5kѓ1f5g8zޜ'uT6 8q7f%7-[{nf{n,)󞖒 XB72{ѕƼv]wmnQWY&s|R/2ڴoPoztl|=y]K7 7ڜG)+ ]]Wmz4#]ѕrEWF+וQuA]q>te9w+^teLkוQzj);tedpUzѕ.u,JrjRL B72~A]jO<v]mPW!یڑ}?ɠntez]e]WU菞洲=1tEzx{uNޫg'(wZ7w[co9IVp/eDS/Qц_!;"{(CsӒg׍\72Z%gjʎ"]p'2j^2K~5Řsݵs?Oj_ZWp/{Amp˜x02u-9;_Wu'uÅ'ѕREWFue+;\x]Ste]n&2ZMkוQ,u4Ĵ3y K.<6AOŰj>sGt+gheѕQ}j zeG]pY2ZOkՁ2ڠ"ԑ 8s7npDWFK㮫-Jc:ڊ(`s}$©(s#=r+ ~^)T@;h:9affߌYptcWx>˼ԣmong"g]?53`} JO͌2 [L͐iW Nu:nV֯^WF)u9 %p)+]=F)]P_̦k睤ŏx eI+q]ͣ\]P4CWMOp`׍ IGhCX2]W{ ֮+]zj׵(ז z]ySN 9+MDWޫ[A0x!]mZ2kW[ԕ퇗+gpY2@kוQ*ڢTpZqRY}W:oFsz9N Ⓜi^}Ub^*D*e-GFG[2{]J. \np@׮+Pڎr`=EW&p%+_r6J/=3]{pp<\%]W[UYSOpZly וQruA]EH=]4`ѕ.}#<֮宫m*y׺I(>gzҵHFʧ(vrZ`byNPpixNϬ)9=\t|c'KsT5]P&'53~mkO@nOͶ%f]c` ]NҕfY@\uA]D$=`v].w%huwAeޣGW̦.yZ\ǥu5W~h<ڴQgzhS )KGϸhe2ʵVIt) yFEWFy]Wԕ;ҕkFW];Zhu%':ҕK?2ܥ Σͺv]9#HWR7.Q7ɠъ_2uKIWѕЋvfQ}jz񻦏|FB ~z7^5>W0hʇ }~ww¾z~^jѩE}}^e=s뽏B߷X:^W^z[ˇ2j Dȱ_W׷w?p7ooY_%W>%I__>+_N|:3rn?8kO`Oyp/~Hy/gyV~FWRߎ 3;sP/+#hĸH~j91q uy,\5_LF 5 z7u v5򶾻Y!>r+,iJx!Mbc 4@q j SLbJIo޿O|CbN'LCQŘIQUI]&L## 5jt0Nb'hBl};q׋vHT m WChUK Q僧: DZkclMXS6 U+ -8`*Vu餺(E.%RDSc,i 1򆖆S D %'L TO)Ke=nj>0 HLby:LÜ95U"kE~ҔZNSN23SԐ ꍭ_hkq@t˶b6IYX*a8!2_#}<꫟h7WyU޽bۏ}T08Vz| ~ &4b)eX # oxPpxJȈ/;"G[,g!]-qRZ-acF<)b4lP$_rQXECA( `*Li`QA5tiXQe=`= K*0EQ} ѭVx+B 3CDlQ~G,©>&n}.&mC69*Zt3KawmYuK~0EYlI 5IH9q{-JԶM;l6OuݺuNuK~TX] 5("Z_b =N >\.-Š:".h /룫#D ]J !voKr VhGc;fEgOpl *N6GC`U#mA94j,ZnkT@u:HDڠ xd!AL@i4P?=HD-#;hwEdo !ܠ4̹+ 1"xሳXeB z9|*i5kNor XFH}xq,T#5@Ј7T*[fy/z+A-хoP[@ 3. 
ffKz C:[:xE=Hc>yy[:-.,L5b>+ۜk2 -%2xMW-d0L&Sas`*mÿkF7ZGuc Iwy.j 灣˪im2PΧO~y6G _6F zxSA9njz( Z-5D6Jr).z*d y U@%5@>N(0j@=-`5T߰^úec4BĿz5_@'bʅs :Pb 94AyMje NPdQ)U,JCG51⓪֠aý5uCEe?@ śZ#AKSy$k '|,~33A#-+!-t9 +hX4S"q;ƇJFx>xj2CzJ%b٦\3jR [2z"Bi0i3h̆bjƪRD40IJK1w"JI-bJ8 $"p8uAbɡ;<Z ?Pw r^BרvE"B Tv"#DoC#zWͬwsq|\M냈:uеf]3lPlvSA'0͠j@w;<}PygtM`Ǐ'[67|3[Q<+pzv ikדUN7kE.P!⻉2kO/ҫrq~vƅ=e{vh ˗-ڕޖnSoC~e[/,'oe:[N97Ml Ohm jHm(G*Q9Npx@mfrN tF4.Zg=ʽq`u8Har Pd"E(2@ Pd"E(2@ Pd"E(2@ Pd"E(2@ Pd"E(2@ Pd"EkrL(51@F@o4i/&s4@6v[ˋixb}Oa>/Ϙ!zOV91 dmo}1gPzfQ@_k-NpKk@s:n[o,fs^L/./imhZ~ΩV49a֦R@S)yw?I&!d\R$r<%b4%՚R:2.=U 3;t5t,hx|j(t ӕWWcJW IW9Kjh?OW t>)L8^g{?Z?mZ&]?2vV=R35t(]5ʍ%]5{j(t ӕh3وUǓW?P 骡T3LWRI#JW-Mjph=]5PzJ՘Ʈ`GWь]\ ( 13fD骉+'GWJ;tT>z1ごO299v&]5%]5ӕ1 z3,ӕJ~9@jdYM֡ܵ8?z@M>ɿ33?\͗S3 M<\S͹0RYߛqe}Oa]Λd׳|Y90? _'(N}y3Ἱ"6>7eyu]Hٳ..ͯxDM<%@|mqҫ^w/߲Q'o5tV-G7B%]5{j(͆x[+Qj 23t]5gC4fC|=JbSf5O ?{uuZD:bǕ=ҕtЪo873\0L/~/ orw[O2t}nӶFן zvj΍_ Win% ݾW쬶2]܊㏀6Y)Q*i!b*kaƅF?;0{}b8[ϿV-=P7_!>X38Q Ddw?w[ `w`\оOߔi7뜕m)+#Zj h]69x7vEI`gf3R )mǷ,B]W;~TLm^myu\㾭l13EeTlԷk &Hm-?Ն;H]E5]ui6?&F`dv6qǻ- -au֟un\(\H7mh}Rr٢g TM:M'пt*E*&<ǨTҪ1Uhw윶Ȥi;/#$KW'e\[&!vjM h08w11$0tH[ 2/J̢}-_ C' JHЦ͝ Q'> >ywyA[̧m?[ @u:/v8<ժ\GDTm{rTTnDt{*,ó@V8yH@!g%+ e4q6h[SU)S^8!8 D_@YceÄmѵ_oWSc[/69܋Sn)Vg?[~%^:yIoRNjcA(&:3Y}̍ͬɱZk(Y}qVboP'f8N[8]PAEDr/%F^HY,qA>,~eqbA䐙*Wkr6B{xq .dI\+5EKkj dA̡0uMQH:*(˙Z-y:>-o.7Zu<o.8/@q,Na}lP8;'0;!1$F^CN}{@|HفJ=J_dw)/u8ݞѹ[h$y}0}t>A`^3ZYTW%$XWNYLAygbg܍%fm3|&X/9AbN-2xaɂ;s7LE q9^w{ ({k"oxaWwSԛo3֛/߾^SfWжmL R?9gHkkmYk>h[a&&P RYԚ"(.&&& chٳn+ R3o)0AyD"%Ιg&eL(q]vR@6qŁnB1,M:L*,4p⮶% m":a)Um z޲m^ـ U][zcjc:JԢU -Lժ4'*kixL)%R%jcGƮJj+b~O֞Z;Ķ-A4DÁ;a>vQ6Y1uZCڶ6uJ1Bٻ6$Uw8ԏꗁ.$6"bu %)>V%CQP Y4{jzU]H.CXHR`&|%O13ƒ)L] S};@I*#!+ 2W\!m%VRGq+H4ףzëVzц[Dl->g?D(g1JW҄(Kd:g d1tS+DkKr8 tSĖťKxfy}vkh_H8] nq4ѸgОJ:ikG߽i{ͻh|3ZkXH>muf(a;ƓƶzM#  jb{ҭ9$;{}j~2NV>%KyVy͗mP)J`Po_ۡ5`P6#-1^|q={ka?cyq19:Եl{LH˂% {1 \+^J?Q L{ _cpJ\^_ʧ9EE6 LOWvϬV\'^S*!*묔{4UG/.("ݟ<%uKGb>OA=-bJ2vo=VYXf1{Cp,%g299X&-A#c8V}LGb)jɺ{G&!C)P xn%{%9Gs K*p.` t$ *[I+^~KE!!*c]GwuEÂ'uPY8pX_(NM~'[:U 
*oq-mhj6|5Z^׉'gqhN9LOҢRսSpde,dc9 X1b*DH6|֑'f3dLT3jkrJ9VB)K.H`,U \]974s`]k!s3ҙ.3Յc]z]xP]< &g:>ܰiGo^f@/.7ç^; Rh2Xr*;cp2W*BP\ X6Ka;*t[z I0MM4$Fd9bfخKwE;Gi<w쩵 1v-k F嬧(a7zP ҽ[*VxJUB&Ky(ѐ!:(lIG>dTc}9ڨ_-X4bgFkDk^#nx',u*,9IlUL0e4'2BLuk;6Ftd@gB jH1%MtIo8Ēu١w9ҫ\EzQz׋m#d!f`(0Z"#"#zBiQoH{xxC>< [FT |/ZFQhk?Od́4 *x#*0U.Um 1> sr% *sr)}EQ9h-0|Yh2bQS)T:ãN4CokG,H=!@TyI k,kH2Y1\PQ_(PVi&U㏣j (Q'lu^^b3Oe)8mgMvkfPyTɭqmAˏ魪cq=h+K[_iiWT|1kO]f`IFC?b6ۧMt9fbf=!h[̫ۖO>Oa浑a2ooqu{3;yLltU֞zKY%k.1t盧]T]yJ'҄‫0$FBà '3KSIHD/H ^!&rI4Z#hSƑ Z <`:JN!8N9%O({Ҳj?8,/= {'wuW"5.aĻЊ'=㛺f i' *0U=L7}sհR\2_sf{Im6'kN*hZ>R(N%c8eK&N{2/錔 ^j~m i1GDc)XLdFr"7}ƳyE89O~0›Cu?|HD+I^.nS$[=]LON薋7=0.moMVP0i>z /5vS\|upeŵp/ϛnkb׳ns?Z.-N;|˶ۈZg7$ GZG#xty=ٶzWumwm{V%\-8q!#Ɗ$xO ==V&U|ˤxodQsyWĎ/o~xO?o;ޜqa^7?}GΟhfKuVlXX>3K[/?_nN8^+}JҰlEGi_!. Χh]roAϻ5-n--pk ]> v[ѫTKb? Y4 wۣaֈ&e'I r;I$OfPLh% 1i"aj&g+#tH(Vɮ Rt8򑀎_jɄhmNrLiH Aوf"ܟtr3ձ v{ 2m|{s:Lmֹ%hgCw'tGjiaBwiۅxƨp\U`T6Py * }DA<i9Q֓2hd!lʹk \:9P w=.gM +v~AK.'(=琂V hk6Mkyt{, gcX睴iJ,)dmTѤm頝X UX7{ CӔvB$빋>q tL(.`R@/, 6'}3q2dWYTQK H[&~ `#.$d}!h4z + UNw5̎[ =;1nөcvS~-# d:tKV.!Ut9g+dxͣ[ ׊ݷ-׷b)ˑ3 V (嚄b)f͠A1#FF䠤 S'@HCIAP6$Y :J61YCgc9;hɞ?}>" Շa ==d2M9j6Gw6y)ŝՠF9zDں,:m8m x:Ax!K12ѩsY,$sv'a AB+/HF7+Ⱦ qg<,K5c=gXÔy)-^FE7ݯrhXo,źۮdj`f+\.*17)J X+G2RʆR}rK|~Sz7r |p=\^ n#]+u1/ْl~벟 uvK65nr˫|M#qRq͈E,\Vl| ^&/kZ˔BRGZ9`R NEoS|Ypv>r:1Qw.a &30&1[scvzgd8(8>j\9QRtʛ`cM[>8=sh:*\(NR/!:-ZP` *[#k0ȜeGe8*u.D~{o* Ix@z#@Gf53"GKɥd'؉U*rgq#8x5fU>$W{ͦn9 @22%;~!EJ%z gzƯѯiyA=O?|0jw?0*͕rrQaSFdVu5r.Lo̒  |-YX);n׭.$>_AW>wvaկƣO-ޣ3?uWXMh9?sv~:zMg~a狂؋{|IF̍{+RY%߿sGz#Z!4GS^&罨LDpY5V6'5ghsHY*[I(2`(WXrʞ n&tMq;?J>'isМlu*_'-O:=P: f`&%IbX+y9U$[Zyng:hh70(),%aMPDiUB.f!N68OӐ7K4Q4?`6~Nwõ^C<=+ H&b Ag8cknt"(> ֽU9^^aeyP<(;TZ3hyt AU B듞SH& 0j֧N3hJ07?TgGO^I2w l'"JY ERU!]T)d5[M}'ס/840tQ_&nW''ԇo[ 1gM`$fX-ĚuzEǴe ].]5j|R1j]Q6LѸT4P"Ř &zKv.]JNgg]|e& Z^~Wx5ABע.Xn7 58W6x6Z-Gg͡s7qmǣ_x sSΎ۟fg}'DJQ?2G9"˜K;RVm\?G'=&56GqlƴaHU4Ɋ֯t>lۮ,=)ze> CiW`{.2w崌~rw vS}A/-B3d^3Gߟv#~}h1Mk5^qG^Xhx_]c|qbv:nE$ώ ȟ>l} )IhZnlh:ao/?YgZf2,+ڶG=4?'=mջF(7؍S&XIZ^#Χja` 현y5vEKR\/h|t~=_7]1Rn$}"M>Rkhy}Iw.؀|&aݢ 
Zvbwވcb؅v߷+XG2f] lrU*Dl;O-F .,J 0D)>?_OO./@v vÜcȮ+rX/ &<jX G(Ɓ%olYKj:Yz ^Wp,r;d=?RֳNr:{~p1+D(EdSBrQC ̓>**p>%4xUIgSlbg  [dׁj-^YϼCbK۸YHy6׍*Dʍ`kLEF^+V?r6Cܨ0TT~S Fp'Z OE]5j;tuըA]DuV՗@«ܛugʢuGwWd$oȨbHXѿI7zw_N)#d\~$X CRzߛ! OHM3 ljTtֆCWӍJ54h˲+)* ^'%)=Vh@xAxWb)i惠ٺ7?֑I%`Qy"8#AYeѥJ։%B:%=>V2zq}5=Vy&!#u:bƂ*eg$ V#%J#Kdm:cMbK; k(Z[`<}ʢܙ}s699k3ǥ {H; 4.Od~vm(_R/*N"Z'g(ZjKIIggy8s+v3hG8s+ZǙYFMdPwDNJ!j^l0 + ɻZR0o(Q#)lb`ա)ȘU(JNSXuk\!hYș3썜h!Uۡ˳'+w}¾zzs'w N|6@>%j]>^ wWo4S~X>,RJj d/{) 0v OfLLĞ@zg  c"ʀ(c٠VFU*12E ~ʊOYz[(Frѣ,LJV*% *7d41.,UU"nܙͻx2;m|>TF$Ru~˺2ӫN\w2bŭ*[ K KA7@l [S1&AwUk]xT"]U,Ei-XϳfM mb C2ҌIF<4$۳F=k~AIR X8fjR(S(K2,蜕WIc$、BwH M~bB!UXNJcHyVؾ$-;T>Ądi5ƊW |Lw aQ\u!Za5:$5@(R L% %UUZ:US>eTY\yӒ[&ÀC^]m Av#mvݟPUA&تSTlԂ?JCUXU0HWigl7Yy#kVH%D\+ X XcsHJStdg`2wEB鵢@ے A)UddjRVo9! eƮ#<1)71{#&7GNkd0]VOɈU*\p|-i~@D3.QTyĄ[" gʧRUj|u(dO u 8m8\GMPǷ䬊׊*es펞Ah` n_.E-ZEO9@H"րFZb%eDm\& % C~ EQ&cp01qz g!]) !*T Xh[#g&I@s-ak܍RP5iˈH;DqHR:!9X%#! 2)( KH DԶ[C QZn Rp^Q 2@Q4ŝ"0vm}G\:$[gk\/.quv Ƣ"HA0HFqxA]L HՀIA9!o~xLty5Y?ʓE\ F>g&-9:(T)xu=Tx_|'((]B\r¹Ɂ#$d-@TyCY*d "⿛b#SR.55 {dR~{\YNCQn>1f7a_nsr#&{iC1FܐBt Y;vݞ"r+i\ʷfR]Vkw{r'mN튭Kl]n]֭w9vf!,Z{ۦVϫwwdߡk-d4D9{ޅwr<:u#?-9E]svy)dR$Y>JZzmK 2p%ΈCelCJH`$X>[)R[mS%$$($HE)HcLf7.^{ h%X~urG7M~%$x 9H-bC$-ii$RMa^uk.$ڶ?jW10) LW4 N$c \V(8$yhX UN:`u.Wź eǽ$W%b2ʱ.Pe `E x'pvIؿKg~pes;/9̧dtJ$h>I\$ȹ$Y]H{4@5\IV `0z<^NQh"iBǤ=hYn>@1KOKKCޑZUhAM Lb>&l 8Y1* &HR95/Q^ ӱ ^=L/nW*KWEWzy䯄a3͕D5 d aPNјi9MpWG l\[5V)`=uRuH6\IXpBᴩvL>d9;ߌ\XbV_ HY™}xwv\w<('Wr*L.(l0= ;!5+fyk59]R}RܸuQ75|[.|17.!iYqA켞[nWs{@6v>Q:w`RTfu#J6$.6 67ufy 1pTaT>2oΰGb1ٶ|㨌luqFtki6b%GOR- rnT!OXw֗#;oޝ~_8O)e:} u30ӽn$A>w'`TN?,QZOo/Ӻhϻg3 OQ_8/a,+H?2E]?-Bi[ow"0Cx; mDwݰ.6oyøx\jp+1(C' Cvokܔ&'GՓII8%O8 ( f ej"%JE-*y%QFw'78,VG ԻKcR"%F¢ĢК&1 @ ԝr3||(ႭġK#/;GYMg5bU\w,z8H<=`8)^|a4 e&U=E3'7y-~MNE?.Fl#T8.EArbp~{èeq]T z`뢢LQ QZg&x*__\-5 zpu+izgsz;'y9-]ڪ@J6,BD+U~YmEy]`W:X3OEpTztw>RV>Grcղ u<]`eJݺc|[PՖ/ۈRKH(6"d5Ky=Ugu[YA͒8@(*sJF"% vrրR2$.R;I@.lFL%ʬbN0p5@-+j](q#pUmJd7cb} ~Cb`8-mc܉'Pm6}.دgOxeX ):g)_DKh-( o 1V2/7lC (UstGx{ubrA&x:ItLFGc@H\D" 7Z33%uy*Yr9qkZ#n`;XA^NQi`w{[ 
妆gxY.rO?s%a1Ýuư8 2P')^!O92t&AFsC!*J*PoDjor]\?DqM&-1H;iy 'Fg#AΠ.c>R&CVrm &- ݪomd<,j-6]6, EԽ5E2݉lI^jn/4>wCvBFbK#rK$qĤϘpVGUFM>u0~+V59MrqGiPD+o:Z?7aJ wZ=}X^ݓ[NrZqhV9Lp`lbIŏi~7clOcSקwmfxkɭ1Ѩbe7 %)f@߅* PU㴔z+qq!CE?8}Wwɬ~k߻L{}=z{zwX1U^5`oݸ꣟\ȏYqs_"Һʼ\ Qˀs:O{_I;v$˴>O eK:P3@0)W齯8xMQc5X]'7%>[FK#]d Whå|e7E~UG|woǽNK\Kʡ~3td4?)u<+dƅkS*sE1'o-Oa=(<[dLclxduݖO;a ǩ>eb(?2 y7[tIYju#^Dm2{)~-ަ,Ok6ig5yv޼ӮҞhJYa43JQA8޾RwGaۏ¸PxrK$JqcvNH`zHZ#t?ŧoCic?v@=S ,4* !yTA-w2j樦SyzC:+*Akgڃdm=`{ڢ\xX+|($ `$ &I,XnJ Jyn!:)ǧJ+]fDkh޵Fre$`,RҭV#z_1H3Rkg*~_jȄN&4Ý΄kؖ27}fV:QVKmݫRlD(("Nh4F9n#pB7KջXu^3R9O ,4X5Lz,3Z'/Il6FRj QעB2͔ 9ܛZ9S-w}!j׺9whܿ¿vgfq!Z/L2HIU.SG]FH^qٌ4ns0ϗp2׽oʍ8j=W2r)ͱ m1 }t%,*k5TQTۡ2PAC d)b l6o${>w|mL=ȻId.@;P0kSYnU/dͫh?$+@T$,.fgt.D.6ѾpH}w'.,g+fsj (Tl$e6.(9>{0"*{#&XIVLR)"̬6uz< o4ȞZt,)j:86yTIDބfE dbA(Nl6'-};QA-nAz!{/N1sUiPuIfG4`_HcWTVhQw@*Q< `nna2ztZsMM%dYk2 Q`5MC@o)"4%H[BBuj"n+6|mX KϧGji8f|]]wO v?{}_g_*mU.f+>Uq &%d6Qk-K^RN>E$ٱ+;3C˳KA e!FR)zvYvYN$ 2rʾօ~t ,$|ATj{I$O: $w\;ykGvux=zV4 Pڬ [2zpAzi5&SPR,*[ "cn 24X@QJdhR`;R"%R&YXJ7~ goܚdz&uqnoMyW.Ob`-} S\~bVz0o R]iP?pp0T/pSs8C|ikuVwOv]:&%^4X/zɬ,Y( BgBzE,Son>P?e16j*[yk>ȫafs>]^{۰v~Pt43?~>:9[`gGil:7o~ϣfo^۾rQN݈gd%*_.OF K=2W,noUW}1WUZv\U)[4W Q}/%>.&NQtOA D]"u9Dm`o֦?vOh/aS y2TXjn8?0ݏRtv1!Op0p.HlICpL5lz<anyY]/xxqosrwcwaGO?x(Ixj{'M|p3<7FWR:,,|c4Y[M_hu#σ7֞ :N/#ڍK޽zcn~m>u y^'`S8D(C|Q! 
>D(CG֢ @Z4/-usovM( vG5 d&.J0`Vn"vEE@0_GډkMiB IJ-i(H@uiY.4=1m{mK_g|_?UHFzr~W2v9+^9kԻRo=2b@3 x䠻Uoi@xOAx4[dLC\""f> *tɄAdpLXrZ;Jt+a8]6^ce'e4{e/<۝\\]xS7󌭆wh |Bzk02;UZ`,.P$=(o]נO5._|m޴1{{h-v7/oPXoVNguLp'=f vח] Quhy b5ܬ`"]73E 2F k,^kre%NI #mTZ4L]uÂ0IDQ2heJk`Lu+K 7CHk<3J5 3ݔxLo%>ʿ}@%9aB؀Oy/7:^DDJd/:*AQ<=nBNꗒ= Yʨ=bҊ_q^"u@%2ƕD~ܠ@ JeMiWShoSMϰv|   `Tqޗ#UZowK G`%r7;WluNSf_F{ `b\{œcyۡZ6-H$-H8(8KdAdIfM&Mt6\mS$az<IIȨZBb ,%2Y]fph~;B#B$ֲ€JKႇQPQ,{,FyLƺmAQn" ]ݹlpF@HB9{/LU$E 3)ٺG+qǝn8+ὶUOJ6[WcqjܼEXozj_4k kP^UZMf)JZ٢*8PDȦ"lLxe!g 1Ad }dKIF4 h+y`d4žqMPM "fD:0*%)WY506*|ܠ'=`o~iniB!.FؔEUUV{u9蕫I)!k+!TZC҅^-)7֫~bt풎ϩ\D5P-$J#M3kYl)%!R)NAvH&9īi߂C[x3hp;0e~3>BSgѬZJ@.fh7iPeuVY 2"P*Bg ^B"dXLP525 dCK/GL فRD$+`[v}rB}T^(q@G/[hϽb;(HmNIӁ'|.A6Avm5HjI7mIb0l1/) [7ZF1oQ##<Ӹ_4xQY 1r <y@D( d1 j !Qm b".DQ Ty L|LDLD5Lr4VfRWеMnk,J q[y*bMg/g޹3cs4l-lI'茇$MWK~c3(Ϻ_ UȐ ֱ@uC|UNէTڇ^R[Kjʹ{ 7ϧkti H념[6~اϧ|pWhʇ.,yM_ϣoQDxk%RJΣچhŀTxsN`NT;a$񼔥cNڐ0EJ bڅYtjQs9) Ĝ)ߛR%@"tEBp?jo&]w]*e _S\ߙHXkEP0_-b>PTsPDaVQ26JP0eSߍQ*ɔ!8 96 Zonm%&FyE}x͓ =QI 4v6|=3%մ^{}w99KM=Turju4hjkY"gCޠEbm#&ҦĜaecYH2e&"hT9V(\@p:HiaJ-c3qP8Vif ͌Mm!68­•Df+2^q~xƌiZ$^> pz:rzb'K tJd BSyR"I#ɿ2ݽۑ `KAlscqH9^ARIQP1,J4A(JF%rd;c ¼]'EhdOP& f yAC"ms7Ƕ_Q1kF{r0G:HcZ懭s7?,rb[}9"n##vqk+scBDŭB"ɐ&Lb$6$!Ǻ]^+ t%#1S, A94Evm }/,X:&[gkT/_$-EŎ/n׍",F)b )2YDPP$ˣ_<_<͹%ؗҖ!ݏ-^m(񡘅$pʍtz ~<ޏ Lȁ5;cڂà rk$ə(.\*:L7cuzhR KnA( Ar =>͌X"z+B0i+fCB渏 2M4TjK7=;`,1O;F kL Ȭ`,+:?!Dɤ[eoPa< IT꼘SPI^Nbi:n Ч$jQ),gt110o}_]AP&7{ Œ,Mt* |}@>(+ؖ.z(;{[|g (]Tʶj 3j q Q$[*EiƠjt$Pet1W|E H^' ـyy =u$(7]vVUgp9Mzש{\˨i^0>w2*?#M_n$ )D,kǮ@nH1l&$f{שw h?.&msjWhCru^=sTvp`cZв6y;r;=L-5O'Yy;˫6~ߘ@ l45n6i$Ʈ@g1v,mjtail?!_& ;˞x l˜|}ʟ<>ye7G?O~xc~L+Վk,ԡKҳ^ezOdx`Jкu涕B-ol l74zb^uQuaVuޫX^i44 <ͪl>.9g/_U|jҌ*9Ynd\ 6'Pc7N+FcdzO/=dq Mۖs빣&@&gdQcC0uH>#Y~C9BH"#^$s(DJpݏUi8PIy]juzʝ&Z8KRyMD ˕`H"FQ#B]+,eYmX"_mJ$x[n#l4N)HrtݍWtXkEV2H!B Ny% 6xm 4X4CqL8=1 .F., {'iKÙ21Iβ(Y%<̲Fb$_cmƽKf(&ƅ~=mjh&rqxTuQ*G<|yg]$3u 8+P 0EdNΥSwb"\ᇓbL$ u^?k~+VE-6V5!W!Zp9l ~](f |;ߏ>3(/ ‡lbJFRc 0yZ;]NU)f*tet6,NN`ȦuSӌɪs9i9)t_R%0y]S:x[~*.^ gkXT{ipp~Q%nW $@j2o 3Z wFYiZ7hHr0d0tV lDFG%f<~h],\f&Q 
>j3Mn+.QZ40f3p [ҳ/FAQjRtk䧠/ q6g pYYE.6E?-v_afhC+k2e\7qqݥ/Yb?A5=;s]D'1I9(E( 1djD{f op`7- ɬiT2y7H-pO +{umTrWLYHt h1H Xɍ3A*q`4PA5RD"G8GH\q:`u)wAQo-20GhrB)!Jz)}Kp3"اܠ.ja,J8lNJ7d1g;JIHˌ5pf #msᴤWHb7gn}o%wGo(Flj|FWyc3)GF)(fNbS%;l8Ees9eU3xمXFq?!+$<\8l qiU Lq6],䯣N2i5Ē8i$qݸa+cs<]6 ty뽷y2G8b4$J1T{,#\c Ct[G`v4ܬE {vW ! Cwu/mn`I&;-Km0Fu?!s=Ҏ``K %q $" >#h Z(&Uر񽂿VEcs-f9o *F6XQ#%=8N+f4JOb9;m]w68 y -z}|_}nbpy”+ NP6Kƕq5e/|C ^?'*zߏaMɿ?ޥ5ާ]}v&zoqՃW{)10:!8ᕝ}ƓfR1ƥe=j:Rjs`)"<,#&'NA'Zz46DQg5\@Q7 Ck>{[\E֟>{<(2h;)'U9ΑTP%AE/@^|Zb|H/Ǘɧi0_W)KK\L_ޅǽz05ȿRlbw nahAY[a/ %Da|hO[t<<};:f<\1橢{ } >'o-q4=( |hćrC NS{x>n.?z4a2&P6Ӱ_ ϲ+0sI1ͦPYbQYeoTԂj>3m7ZpT~=Jz?$gۺi auٿlRYJdؼF-LyNehMzt0do 6JԨa)׃<xyyB,n.gTmn֫XmiQ.Uh鄄)cvfe]2 >7oۇ)M b). Bm\#-^8LLWnx`M\TBb$Πm=&UTdRHءc{Np sxg_ct\lLxdmTwfUޙo솒$3F>yvݪ<`vM8޴/ko׽uܛiMwU-+fq%- :ZTu` 0GM,.jnDh:v榏~tr :2R+50[*3NL*޵$O cQ;I6 nY,jIymo=Ç(Yԃl4Ʈ, 95]_=+kt{&~ނYk2 Q`uفRD$+xmr4fs<ۉ3ߟ[iP`;)Ws/V.D%6 q` L0&|$|ɼDT4t~M} W^^8z۰qZ>׻OCkhg~Y7X hFş<Ό;ǟ:Js~֞qTUZ[AkҗoF<G/Eew˓|-דx荬 +J<sUŅ1WUZ+\U)W]sj̕N:"s:},报5R*?h|v5Ȯfb>>qqn;'d'v9`0?_ɼEÈm1wRuʡsR]zF xmF>_> ˠG 5VƟ/ OU0= )ѭ$oW-xlҁ9.BH6Y*e%yI: %#bV!3BJ:CLD˜2 K)O$ M 3FMK *G%hiDFCFkr*$ofTVc2u,F@DjpBiJ1(E 2d g4))1X|R S$+UI"4fM`!V=E5MV *hckKl0(K]F5W/yدAG}{=7 ;fYuvyUgϴ ^ WCHh;KA.:QgIDƫj|6&Y/q) mm1R^$eZ (+0)0H Ic$1aY ,I%AOE,TZf< n[CBHBZH>x7yVi?]IA7Rw/A4}.oa2>]}ETXHtT@&R$cwC = Yʨ=bҊ_q^"u@%ve 1Yg+dP1 @(;&(-twB@YEem՝ύʇs"g("i=}>zݾEgoqyOϦWo3Bwe~dgu`iCURV0epnp37o8s#v37=L"( $*Ye%}A^3&&iH:v.{ $y<IIȨ !Q"ե1l&8sn]է[y[WN_I1ϊLoh=`}nzq.z\B ޢ"_vnF-՘u$;;@m E쌲!Fy\0(@lB#B$ֲ€JKႇQPQ,{,FyLZv(H|7Ӯ` jP&P #SIxFLJ!Jqg: .2dL) Jq" :v@:6ɩd>tUNڸ:UpZiGD_hY\GJ̡W)a J[4~{n茏jb9*-rC0x6]>tтoR 58Y~:Hfe"P*Bg ^>wECa1yj@Bᔤ`LMYx}HY0!,:fK);D%>6b6&7z5]h|aVv9N+ L퀴WFZ>鷳@U>wlk}9HKP$KRG ъe89>8r&8 r&8r&FM2KYJZ=d S,](UH5sٚK̙")Qd BW. 
:0.!f<2~>$&^@z<Iۄ3ұREP0_-b>PW1 砈\# ]VY26JP0eSߍQ*ɔ!8 s m4>.nm%ÊO'}M#<"}ۡD͓ #૨E\mo|3%մ{Owĕʷr~ot1t4hjkY"gCޠEbm#&ҦĜaecYH2e&"hT9+\TXR LVZ4J֖8iUf3cS[m!pp)Q}8;X|0cm/~ p~>r~\l%@I:g%h)L$FjeIډdۅ4m5Tg/:FU5ںd!Z F vcmI|ybb͎ *mv )`_ uQ[R޳q$Wݎc8lg9O1E8Vσ$EQCǰeq]]U]U]@pCz, v`[c< ds=E.qVq@C1 d tx<0G:Hc:懝0?,rb;Ïm9"##qk+ScBDŭB"ɐ&Lb$6$!Ǻ[^+ t%#1S, A94Ev }/lX5q퓭3,ٖ/"bF # G1P,I"c(ſQ//vlinwiZ pYZ+OVnNp y?jڱfO;ps4px0]n$9K#[gսo ޛT6[ҽc4JjC9C0S()(({Lb&8 nH D#A@u F/!t Sq$shbـj o(p BJLi"P (^b*Ӌ0IL=N=jr>5>Q(K3S:#i6^o$(Vϣ >0cE8e&\GBIzQNs>> )xU=},ESC\Or…UɁ}'  HW-"ɴc@Q_r$Py1ף|I H^G ـyy =u$(w55 rmzػͣ|NneTjsZ~w ňx*(ْBtβ6; |ZЊTʷfOLRkwzFbSBZOa9G۬7 vjBZ6ͻ]z^4~:3=/\xMrs7rEut\ty~;^hZmy9d\~\i-m/ciX"vj+bS={z 9eҤlC8>$Xޣ[-W!DRG#p!QTc͐G!R9+e$B9 3:K<}I^l˭:+)F2CIn!D)m&Dr7Qe pݚ .Cu;"#4:Ȳ̰4N+ʊ $*c< arc}ĺ `=[jF@-%&"J0B fe (ܣ|fK@Vj,3֦P2.I+`mHT$DdL'n i+f֊2 dB@NI J0)m4^;AhhG8!0<*]wYb1r0[79F[G%h$Fm+.HH|;U+f|p6,X5 T F &g~]?ٛw0Qgo7p V`\J 2?o S:7RzpfPTE;? ~ ljG>W~@˪l\E',˽6ߑ%ġafhC+k4d\[LŸҗ,D Й7 a-M\EXH1тFQ4`p(y ݱB8ָoCiSf oc8-quq2kڭN> 1ӁŠZ{M%7|~D@X#1:|8F՞dE8}Gi{pF1`u)wAQo-20GhIJ)!Jz)}Kp3"اܠ.ja,J8'%tYYhR:1c zss4I]lݭvR`t=I[3ΛL9"E6JG1p;(qa1.ݏ)3.|zZYOyh8 `Lls ;`≀Fu?!s=Ҏ``K %q $" fDa-*s6ӖzE;MLVaBcVq~4{7*i_QȆcgWOc-i)p\Gu<),:u~?䝓8AVMh40O& EVyhff7<4f* hK0|=Lz΀oLoXr§ygoFul(*4'ezhn61thְ' emm55'g?x f J B<cX 铫 3 S&چVB$g.:~'E 3ʉ ^?`1q/m@TaY KyW:Í-#,uQ%-c.⥽Ss>Mnt@")NһŊY\ DV6U3X`"k"QKKES+&9Nx룲${AW6QjzfKef@۫0/HE/It0Ϋ\Og@ sMrϙ`"AHZ#b>Ki HϽV]AŷéMw2'E3Gx\P Y6QY0^<& UNk&OdeݬRF{s<@$L8@kJS-It!C7xcD(ϭe=vB{N~׫jVHݽBJҜΕfS`AǞ_"UeWzvĀQ|6 cڗcgmI 9(y@r;QubV2z\߽{ȷm}7}?ZL׼Qv2$U"I8FoH2"R~@AJLEPPR4AKžWMP"X3$< v8vU]%h wvgW_ R] aW \r0;JP՗ɮExq.iFwq#G*";ڮ~.~콉&\+4hݽ 8mk,r4l-k᯻]/6|^oVݸ>VGoe^.Aڛҷi[ן&LWy&%lΝr[5_~l ykqMd|sDW}~ɢ0O`- WмNEHb5)b`c չdBpYP5e#2漌L䊁|6r%"WBar%8Qi6#b`\ \Jh\ %IŽ ,#Y:#䟕~LC( {NryP`lJpI"WLk\ %Lr5F➕\ ȕz\ -˕PLr5B2 Q(\ sȕІw$W#+=ƮT>r%:ΠК0tJ7eWc+RN;ؕl:B4tJIF(W}ϹJ3z3(r+uC+SgprZ̤ԥKc`(=M޶ff1JR)kuOf]?\.+L0g/̮Ⱥ'yBҠ=u:pa Ps2gh-yJ) T1bH~Nr@`R[ldٓXvFQLwԾ7o=fBiPcq_A1pEj\ ȷPz7PV&岑+l^ ơ˕Pf?Lr\QGӅB^tƧ;wg?svՏ4k% L\$W0h䊁ȕ:E\kbzQZIF(W 
AsH煦Vym9ko#=]Y͛GIt 9J_pC!+,e`ҺJU]eQmb^4EsiZ^eO@"u7\A~1~`Ǎg^S??<߁Aw=?\c" %:_]܌aDX|!#8ԋPwY b-=>Fwg+$XS㧪3 NX(BPde…zՈA%? pbə!XŋĚA&_:q pƼDIC,R"`znKC)p|viF꫹ÑfB2P[է[hSEԦ@z(}b2=VY~wUXwg_]痳}PQ_=YZ!0ǻHWt*gao݂&kz5cƣ,J$P(3`/9ThZq,1fRQS"0Z@3zLFfcgJ#r,1[v#lJW=v$!cb J0ߩIXGljK\u! ɝ$9[YEm+puvc_ Ctr%TMo6ՆxN!39zkn ONKjk"A{)E y⺵'RB!0707o?5GKtǹ.KZbU/M{_ǣ~w-GePtb. ~98Q_p|rGq~WBfQwU O?5|K8 lpz4`z:ΟzϦF'VGe׹ 硶kb"LikwW\VHOg.'aoK2[϶g99c /026#9T1aJ~iUס5C Ьx#vsy=jFp\?OG?N'1؟W؟ʸdm#Fvmd|G{l6e7,ݻZ1c7(,): IeSǗǛzy?ybCbmMElP̐TEG4CH?bFV"v+J*|JB+59UJ ȓQn&p>L%1z+c S?b~: paLd6<{!-ҁYOn=|b9h,s"FK?:3_OFxbVySCi_5Ig|S5˧o_jN{;)W2^ѹCbZ+cKSiZŊ8p-+VE2e˺׳X}@u@ H!fcW{xQ$.-$(, \lzŊ܁*V~sPvMҾy+-SRJrA޾扡#\9Sf$-c'(( ֎{傥yh Ja90 x>ss{,&燵 gfTqķ]iޱ$45:6O)(S8&Z잊 oq*aV= Y@J00*0|1E0sc`H?\SܧM%D/q4pduOm) G]m)Uˮ;\fN(bKIǻ<\mT `~6њި`xw].htAZsST( 6*bfG1۶_jLj Q_}/ߘIrMeJ+1\HZE9[79,"k.> pı=yLYy NIQ蛏3Ū`(L$;Qv9sJ ڟ֋0"B=]?8ǤVkB 2YX%\|"$V”E6_ jDpID189ީ@>v̺=c|>jQQN@؛JK`qbo 띊01;M @x{NDr*s]'}8#V#u?֢vc(}M [˨ ..ಮ |ظ@Je,ǀb*ŽKkK,eK"kxOPRkR29/w`$ 5s1W]F%j@Ք@2E)P  Uk IJpQE*7!|F=lbۦ`+yXr/L@uqEQwsRm/+3F0a&)6n((\$7^<=8褸q+֛v^=w]M HI{ h$;1 e] η ^gt7{h1q[ܸ-޸ۦcTm.SAoֵ3]B9T3OjVv nSMGjChO>Rc=EX miXB-,VJEsH煦Vy#0O^ߡZRSY^b~yU.ߟjjU vqލ~Umzg?޸Exe~ 6w:ۅ^Wc_ė{?>h6?m!֘ko#2XTZ^ؼܜ66jϷ{ҳ.w5V.;\Ddq{v SnNm.)iv܉ڭ E4K?ysW#r1H1hcz^2v˞hvBBr=X$d^.gQ&J~p]OU@@3~wvLo&rxt{{"qb,0bK V)ƽc Ri_#%s™`6XF %Y ^zA*0R`*-+0 9 *Å&F9$լnr-k%悈 @y5 ʩԩx{@R՝4 "V#  1 ϘⴂKfRxu EE'mzXs)3ďz*_55 ژ3ě h!| L %XJXRI\>4$pM1n-|+lɄBm Nd{R򲮳8N 1+@P&{n+&ws?a-w~k5P"pcm7?53 ~bvTѳj3f2hrv=?\c" %:_4&>6X5y^,t8qPw )wYp sY[Sř',L!\(qQNXBI]h| |dM/anҁ=2$ez1>Db|T0Ӹn7O*H,ׂ-08PAq8`:{aRp{4<x*" .`}g'Drj]fC3‡|iN±$Vi`c!Z'w-Q_a_]K42pp|),_ *49.1(E<'%sC=%! 
R$ |?B4|"V#Z*k[`u1xg6Yi1 "JEМRJ) ۺ^'RNJ*yO 99LqXV~N-Y=ʒ]6~M]K!2ۋ<*%ۉY,%Oa% Y%뼻;at1 ;z뜡5pf͘j'NuVATQ8v]->zrHW'g"UTcY'pJLqi߶傧 TI;c$/z;el3Ʃr$U!h$SX9/jbGzokb!y"UE o KĨv |z œٽ-H" ;!u4Ed osAF U> 0|ZFJA tWeYUv!>8(1F`JiS5y/5E01$LQ6d.͑F2 !͑F#~1~B?4𝴘 ފb,9xRji3dƆwJ(h[Dc(HU@u%Nugߊ?~wTnz1v<2]G5PjD{|ea[扼ݷROpb6SqvAW*FZвC'ri|z.bBy [A*rDz5 VL#CƣxD~Aکk5PGmeܣ>vE"(QF/^eECB bD[9q1KueItK0!D@Y]$rߒ N CJ9qqA%" Q;/IL1fdeBM0(bg4(<ꑂ)/z!GVS3/\vtٕqş'Z̈́UܪI $>htPrqkӝC&AJјiUh ez;MD Nάw(OpǠEQz?ipՊx`~|p1ç(Úc? G}(b̐Bꢤ&y/d>r} ˚2ء>TəbA?{WǍ/b|+VUzý/w_0lD^!șHd_FnSbzꬣA:1 +3Y7tC pQ)ZZy 5ZjVU7ve>Z rkQAko1]LA_drЏa!/D cfsf 9ryuCάRr@vy1ӿ$ fGs) 58hJBsdBܗT`Ǣ2Jhq񈄖&F&O"Q&5KƝܺNt^E VDP;0Nb{51WG޿ÆfFJmTs-3PjX.uU371 #ՔƒE,V։CK o3 7Ũ)Td]KvoBe呢rZ+Zj5w, W ?k: ^(pk]\Bڏ6f8_r<~#G/L0NtB80ض1zh$pYXZAA@6ɘ{;9=S o*S^JbLb&ir\m|c K HR.T jH ;aM]#&Zs cOi.VoE !IF+g1S)eDt]V8HSI@6Fu+D M Fz$ NG\X)|0 No)9-? .ipI'XjhipVdY0U%g$mj?6 ᅫ,Qyg!@_z'PQo2245 `=OiYSTY$L6xsFZ<>S AVQtjdnB =8(A#ePuaoO5f 9$,@ˤVhN&371RSw 2`OTRJ_>TJo2Vkw*JŌĭo9xV(~}# OUydlguD [VrZAwt&`nC|#ϚA|u hrR/C^i% ":1K+ww yuQ%[?BCՠ˱駟,#8+~_$MIyFo_yVZ =JoGhI`;{5j N wv9sЊsCR 2spz)!9}7ZOyr."1=;f.c4\c\r6nRO -{ƺVi_wz׬zלѻ~!Jo?NZ-ޱ$I6Iٵxx#hjx9oo1>f柑b$G$˳,Aede aN 2So N |w')m7wi.mߥvnE V9ݶ,`:oCLΓ0!Q:V*hD[}[PVʊx"E=*0!&O}%smLu' v]ELzm, A@Pd*2XOҀv߭RjYGlq>PW73]ym\}~(O6n쪮2A3HIg` cWB TQ3F]]:x,P} W\ W~S5Fݯ[9Zo0/礒Ngcί(2ǰn567'nx ޭөFw[0:b6VO%ưnU6Et}Fڜw tjQD"I2M¦RQZ HSpiM@n:q5u& Z;i[0VMmE|e5Щge0j!3y Q&5v ;UeHY@y۸@g1/l\mPm0Z(G! yV}!y@%ܦwiU kf˚_&D r6fOZ[xj:K遊B P;]sBKwzX#e9)|uqvUxx#h㴓'6g~}m:K  aã\LL*Cg^Jةe8Sw{ۣڋ_뽳jj;`e'轞׏&ҽcv({{녧p[HK ({Ú4*PR.MU?eF }|c? 
tzׇaf?Rn^tR%T2\*r.>tQpQ>>lUEg8Bq+D5̭b6bк/f(laeeg(;j|ڃ9+`֬D \9lhe hG|vTT~Jr{=u/׋o2l:KEGeb/L gJ7g>ܟ@m80D8x{c<絺H'q ?M:[ZϷIl v@$(Y9fwl+ 2R2hOl25AyThaّgٗAa9Y Y=Po1(ʉcxnj)@[#=uEZВ2ȷ-F6@C`,\`zno8luW|ϥa0[!UȦOЉ5C&ΛBF@v^G˝]W̏twӭHuH'W9J60|&p/#]dfx\glgmNM4cכfuZD$eB(ENư݇)PVv:.Whbg**C"xj4L$k8*_&?2~uai@ .+Dr<7=4kB4 s`֨)&]xsVxk|4 \'qlMt> B9`*xYS+Y^~~6c?ޥF.$T^,FW JXnG7%y$Y2 _ki!y^l_H# "EZ} ީuZHžEBEZ bpCRi?sb ϸS`gʓAaV1J+Cگ(r)Lf8W&ݏ_O`Fh)dZg)xV)}Sah|9ĝ)L gP K\qSK~K yN}z@|JIzL||^tn~14_~)2 Lp8RJZ=V i0Xf7&{EK˹n;hVI-:a)tDOL η-8mVƲt"F)-)(2q&fJkpvY8nt6mB+LTQu=X'HAFaӴR=S fzsnL5 Ɉ&xōD]`3}8(sdh MHYPCaفLVNuR;[뵕HXU2L"Dc̚[Fʹ2͍~sUĖM78))~BN[A{#U:ZachX7ώ+%Q Y7ȶ&ra$3՝$쯱]Za%OZ& Y?2aQ&i y(|F%Ývvӳk0FiTRQ|-.+EI߃bGo2k28ubzo;:}`~է6BL@x- GK tIb˸lg,/ }wEI[rq]BŨY>;i>] ^4ZY]p&1]˩09"*ʮ T0PՅG竬Xd6ODم’m!p. $2SvEa_`&-+%E(E@Rز$;Paw,k%e*mGmw?Q =8Qs.%na6%lw)y&b\tM܉u4\]yu٧tq>}æ_}z3k5}}~U߸`]zy)\o~w4_N&(J.Nǻ3nmW׸ n+1{l#k=[SJ[KO,䅛hMbSySnN7JۂQ)ټ[=OSwB^ӏy7 zZ rL%m]nܙm y&:ئPĭ,wۛ덟ɜ M/>]}qlE+|ͫk^_~pw~6Yjw^fVT~SJe 9+&5)5_W3ҮBI*fƗ H;щen̼WAΘ _e^< _:ɽ[.s=_={?⡵y5%ВҊNfT", pciL'X3V]n`óerh+er7Lӳk0F.(ji>W;'- UYnn{daД!G/M5`)'dNZZ^̅c܅< ν707t9{$9X?j#jxh>,/2Lfk5%UNr5QGi:vJjS^ϫ /1w^d \5טּJO/GYOgSj6]\>\ǞԂC(1KO=^[fېKE%&͐<[Ah]Y J Sck>cゾA|Z Q"[,s&'nF@Ûo>b9+<]?>WōWX^H'HԌ wHJL鵳*u`k~zI$^7s>AiFOGv="\ Ήͤ& H2kT9Hr` mfqR"|ܵ'X+P P(VZ  .!=%T(NUEt>vs?2MiS 6 *q0dSE h )MC e86[8PKYuu&T-N0y_W_Ң5dQ{r]AԲ%c\܌g]UR8=x!b-dgGbH:9VI LGZO8| mZ $HN9)?zq) #Z)sMɔQ@='n YDr!fs tZNYz=<4I# NG0tў JYF@+CRB}p[;j5zی@o DE.6,bMrk1$958ܑ%bs i'išvEeQ[`rALݝ kB5)͏қpk2!Mi.[m?hb#|Ԃ{\Az(U;P6Sh`8m8euM# 50iQp3$TYH?Y<;PԸk3^t&LtKv4^F*]OjʘSЮԂoKbDIh>Ocl aun3>+$={ŤV|9CG{RfckeT43=rSh7i~ssBƸM{[YJAM'$~gEgP(>9ᵸj2?ل K cgqmȪѸןBtLV=A4jbqcFﱗز[#wqo 6 w;$9}PN)vL&#&8;}8B(00Pf5iDwA(F-5ӆg** @TfO@Rxcg:UTvtRZkjO`rm, Q q ҂cw=Fp)4ޡ4u8%ʘ^+*a:%Ƨ9Ř:y $rFJCLwWL3r`-=|:-fN+# 4!D*:&J1-TlQ"Tt/qO뇮?oxz6v}Cp1;-?s7[LCoCl~}ܐ-LGT4?|L㠿=u_}7﫪orV2l] Mթu>+o]:ps.L@䗿4;߹Ϗpxe")\c SϚ=B:P[!lP0ѷ4"]_+"M,-U!1*&~=aWc;z1 <<о(`Wd8ƀ$ܕwӺ3=ߠ#\X1zӺDT ?c/3|Q7{0M=-b͏*,\Z=/C4D'XN/:r VUE()ɪdBorc\߮-oHef8z2 `#( (r{^O TLhtdJɪmRq.!Αp)Cun5[9SK |{ 
#Kn\M!,J:TbaBc嵆* _ܕ2noF*k igFv*ϕEFɁ1vQ݄֫I\dxBrg95Ԓ,@ j$Z%g6[r)b G}اW%^"R*\)fRFI*́6(@._C6@R(L3Eg5u: gXcS*Umu6 حBoq'\Y|5؄AlRi/ࠇw̬/oH>팚ňN"ۍ.0!ii9WZNbJUOO`(!Sc[ H/-iUv : ɫO뢐j`[g1yDP*Nk7f2τ~ͅh-p3;2gkB5ofR̤Zl(ЁO~^gϫ Pg/RnR<]1'6l}>rE5M0}N߇w~(_?~iw?yG~׳38ł#JhZ#$ƣ/=_*!W?sO)\d:SVx)ʙU&w!fZޜ`P5e Q Im\2qe&+amcdw4Aˋ*| z_'HwԌ wΔJUf 3tZ̟8yQ% 20k3O]3Qа6sFh:)4Qbrŗjq|X/T`rݩD*Wcp^~Vg VQ (?^7K3Y#^gbՖ޵6qlۿBun!~o]bHL6\ IS~:, bLݣfF֌fջw^kOw/ AB~`Cf2}B 2s_uvU话?[q?Z&sй='٫Ӄo'{gGov~7Рh~V0Wo9;wh| o~zuৗGG=_5tw}9 vИ|: .Z\gpu|5^Ec8[ Z}zyΔ ~\̟/!£;6k~gi|쁣au%O/;Nݪ,:eMك디}yvvK6?T)_k;/acԽi̻yQnǧIc>촆-msu2~| bb8=s`cry+pM`/9KyNn_ƅnj~Bcև^}Sk5V{Պ]vȏgx χ??owA>j7]p0#$n G‰gl Շlz>Ɵ}s~׻G}Us񨗧S'o#ot;'~N O\ aJ?δGQ͟K{/7ryϏ`@e]ز{^b;Cpqt?u$ڟ{ F<_4BgZwר1RQq?uGשp^FedA 3C|Iw+2^2NA:yx:E!rBg=ѱy9g g g gr:uVӭq\l{ |~Fu3  DRFy8gfgkw׃1S>v;oncsqkvV]n͒?Eb5!`QJ { Ndie*HX4+"vD+QsCs2$T?%8Tj~b7e`ۤ)MɛAY=BhmhDT!k {˰€Xc&he_jv(aEk2`6Ľ%2::e U!Y7Qo&fRo&dS(EҦ+^KIpiPR G1GXWN \"EW-/6T =u\6A((\Bs%Pt2"7VH{"&8Tǚ>n}UjXǚ>GHI!(= (X 0 rI)X` #"h\ @!z w{]LFOF5`k4php*wIN4K9,Dfs)[@x;m>&@XOs+ .,s[f I-<ڀ} k@Ip?}Ӄ5 a k@p7wB\ZŌ`2 g֛iDG Q.$%'OՌ32Sܧ-bQ!g5=.95hj"Qc)QJ9Ck0.׀JZ4b*G/Yq"$LZq*wX$ ZqZP/㟔.E+B,FΞW>PϪeK$lҠzUA'JYRrΒsY%"UUAK_P>o\qVg`aJ EAE= w'9%"sǒChrBh%2Vb=Ȣ!FB2x AԕK99s>R;Pʘv2l4=M-rcJ7Nyn=Jm}v16H>ՔøAV3!x yR& JY,B#FPa5:(`6 ,Abi#U‘HrPRI90/ 2z1"`Iyt6t0J!&*fԬJH.JIZ`yoXZWEP0~ﭳfk}~PhqB xc:5)5q+RZ,,XKL%%+K);̙CUQJd0ӏL+Rw70}ky2='1:,xfI3a&ͪx:Uu/D&)1[ f*/\2o܉p$%!lV([Ÿq&Wc'Њ)`᪻vT%u = k6+V*Ute>6/}Rpĩ+8@@&_¦RssRT#xr{7H(ՅX [F}!S<V2 S@w%TP)r!+ mӣ'{} ģ^Y2ma\CT9$Mej@"2L L30)M/ Y&}" !i!J4SP2(wH" aТ$@)pp}D9 mZ5"bǭeAu蝶"0}F部-7?Ɯe\.#PeL*Ѧ xVwӦiM~t^j[=߰E R,gׯ.H:8CMǐI44ʁ(I As,V'nT R}lr"ݳ|@ =`eu~y@zI,jWhBVp+Db0ǝX [kfe0YR*+Džp&zɺ(Ѐ0B%ɺ((E ":n+'+٥J = lOR (wy: xp&볩D_n*!r E=1B4'M%OMhS\7T"YZI]= N}%g՛JL{Aw%GtfS:I-`ɛT5 6"IIR#r^]AeYwEP6urr`<H=(KՙY9J,8RzѣwR wJ&(Wv#ukR\N2?<Հ&GY,߬e>% I%=15\Eup9r\ Q{NqW"3):G9zznW=o}"&3Yb{>:V[rY~R~~ϒ|FД[}> D>e79iI8-[T[&4xtr٘?C,kM-wm"S@K iMf$&̶K> V HǏipn0~b<;g.m6 h&̶skp5&z1l6]߄YXtOM|ܾGA=Z!6eb\2䋭U<:KyuRb  M^p{.W.;u::u:ytG eQ'_sf_ZlɽD/80Nu;3F*y0Sy~xNiV^ Ŕ(rc*P. 
k &9^>G㹓x\q?W? <+~+OWq_ 7m~y=ZnwɧN *Ţ2Iۙ2ׇ'w|$viU&fWo*#~͋/kʯnSO6\>Zj?7/X`? N}3[@/{Ya7W6'P[bY4'P*| H>&s9#G.]szXG~{)//^q!nܮ~5?rR|.5Z>jZ4\zÏT'7 > Wq pB$evhK)9XN_sb3Tso1?=p{YI"ח05/~ ꣘/p~u^jռS}e/Ir4`*a ^5(;)96/Z?[۳ڞYIb4 \c55x n%^a{g37\+Zג3uJaRR40h AMp:XW )ZGjDui4"kU^n{ Q3iijVRM4(?$g)4 5\ ̱U$)CXSsrbV|Z[CkF#O?=l{]\5~hWW,bo~@}So5;{Y#tVLPbnRy&1T*q<gc;PkUVc f(i]c 4@v 4]$NR͛Պmο}J^ ӷZ|~Oqiw|{_X_,/f?_ާO//v9xco[Mxwd'4w!g \|qonKL`4|u0;\| :T7kl>:^`9|vNDkDnϟ>[ߎ? /WD9`6܆tX)TTcwCe֜ma_jJ-"I鷩Y2B*iU ?A [VUt6albx, 7c3{ۜ8K3b-e#DopqJXw^xq}鱌,H1oOgvdLoq *Xp )4Diшfg2Y=h ;@h=jS,h}x, gwS ٬@c?rBuY84MW?aqfKDeOȪ([FeSIqvζ@L&D䜯G ✡4"ޜ<.qrAѓEEKrcQ v t6{0e}C!8ŁXvw~Jˌ0[O0ܜJ>80#qQX fvȘ{=X B $.x> J5tcP:ez|Q=І1ժt4[e0:2w=AT P9In͗K[C2\oAn,9SO hHSmC&IU(jF՜ NXWXJJ5i7*VZoٕjS~F{6ب+n@h9z<jQ8ŷnߺD ΏH2A7G"ax$꘽{h2ql8\;6-G${x#D.]tҟAm?[R-7 J韙Y)0(gU 2R`0HfWQvlLOhT* {$Љn2y:9׃CۈQ j0\9x`5cƳch *@LhDRoE<Z2cPW0e<0#$EdO $|lDvdL1$'ǢR$u440D0|rH%tī~Çg|V>>H)(4HηccehrH+z"$vdL&/h1nVm5MJ@R=ֹ89$f5R5 }ʕytɎBȉWu"(v|DuȘ2 T= yw*/bw&ypoYP^X>b8wydTWܶVR7Tr9YkO y|l+͖^6S*EQbѨvDbJ?4䫟on履w,(p)(~5>{ui Ae:R"ْS^WĖU祴FEAd6.gfUT u*:{˫-AZځ{ fNZSM6RlSԲ b*M梊I3FZdLvzsKv{aK(O‹P4N /l ĝ^_E#bQ\b+fdӨUӌx8}.Bgڱ1EyYL siz&I/ R lR:}lQ/c3LT<\R1ٟFTb|TȘWVRuo.P1D&^Vc꥟o^vsQRyQRRv4GbqTG"^bfgGKhQGN;QC^Yj|RƷ(ߤUa>J ^y7\jքJn-/ [P^aH9i7x)xOizNFњ>%͈*j|$1U!vs]$ alĝ6!Zg5ߏ=0{T ~<<@cRɛtJl@ÑK7&&[dx4fe^ޣ?\;2&>$GΘAC4LfcIj*6!^#cHv^kG(kTr< J:{_sl7m9q"Y$=,O4w h#wad>dڳsҜ]0M ۭ޴fX-*:> |,[ne}+,[oEeLY7fv-oKrzf|%n=LB'?^7Izpof=)m=0,CKDB*ٔI Y _?[k4y*~ Z~ug̕2.8s^GEZ ]{ m?ߜ*A|å⮗Yj^%߬''sS2uVY!SgLV!RgR<ÄD#P,vҀB2n :uՑ&լޠYϸnBon7}32VLٓ3LU^ޜ/?y;xK07 Mo'*dwf$ IGڵm,]*bs1v*)-tPbdU}s$Y@EςyΩCCBz;6\x8"ᮡTn0pjebBN 9u&ԙӢ Y4M&!Mq0V!3HRڵD2iwM`$CD"1o M_~w)\6<̈́ \5X*Fvg&t#@~IHGHҫnCy7VXnS jOknCjϊ^]f(kM:/ݥbu5uY^SgyMWQB32'%@fU 0s4O5vW6}Q#'ot)@$LѷEU-ܐ DUKR;pVȒEcEBvWb3 L3࡮5lk,s8BߑKDY]?i2S#ccr֖=S1)T=}{bJm2b"3֜ĊDSj(qiv8$hN]Y5D2X7ڡUqXj˵HJdN:K<+|պa8MS*|S4Y(SB5ٗZq*7;`gZȻ3k"`e^ciVRL;+qk \ jܚm5e Y[KSXQI;,jexfOe%挿눚( \J8ˬE+mQJB65r2Mگԥ5uKꠓS,:kDD h$4QȪmUd FDP8u\Z펵n}n  d"JMcp7j>2N֓D1f}1;TD07ÑBVYD0{3J@`,oy8p/JԜ4kޖ=32`KJWصv؟Š 
f=Ӯrk?ߟ}:rA/kd.KɔE|-! ænc ZW%_ej@{R%_AQ& "ۥM8is_O&Q0%==SB)VZiW-EɅ:Q+,[ 2Y[pGTfШ/5N{UeT 6)Pk2OǏ~ȼO=lH7?]&.[R$a/$\D.{ZoxpWht?+xmyx \P xϻ\pcyO_Pug_@փ0ʆߥՉ=KpRXOϨbKmΡw6\8^Uҿky"ZV0Hu"1S,6N.,`8Dj/ :iY(iE">q!=(󮃌 0nn2p[^LC;3;YWUd8XEU{vA;.vL0S{ ~< To+zxQe[|iAV)9KK=s>G* C5'*_ne`5)d ŧL$X7w Wk @ӧjٌNLC0uosY#༭*I+GW ڮ SӇd0:J(cnNg~y&04wyb:(q[ה4B,V**}ËG}_]<=UO0>,,$<~&"(n4X}mmOl&]΅7BeͰaAp4EOyxz1[aÂwX x5_FΏZ=,,,  ;XPId2|Ƹ @u/Wp_+XjXG5'\QTy\xL>n)MsC|";UKV%ɪ-Cey>^yI G^gh>zF\o? ~TH3ǰW-^K%RFw۷RF7ߏR+8гgO5RXqzz@e[qh݀lFXَ9rkQ,@uS-W~܄r.x3c?ѸC'ltcj1 dtVB:X6c>o$9.z(z2!dFc3cCPFOo&dS{7z[3h{DT|ٲ"yQЕHghWpE 1j8&BHDT}Cw[B]1w2v,@!za+,Uc_J0 )ϕm r}2^  IpKQsӽ&JTyJ/!B)3kC^%]pΔy5U)37$.dâ8IHe&Q_)9 `<Vz ~\vٷ bC/QT #`zpx5c(MC0K;$dۀR e)1Nj*/rlpug_>kUj.11G'' TvyzSvY 5kuT3F@>m4E,p<`J2_YXrc>z1$_wԘ?lEGf5+ρ-e] B/BBN Z0 i?~Q0@/)yWvbG"5j;OJRR)X z˒:W$0Kin@"D<tcUPr+[ SW>gbMgH'D$eo"ŴP1؟yNjX+`AT9Gw4}DɁнFs_`. &SS“dq'A mk՞Ce47 YYB[: dnM}_|"a[N H`Hh`VVZtZ=f޵0#鿢r]vL T.d2m\If6)&zEgkLɒM,;Ih}h| 6a46«45IwԮ徿1x$oYFoMĪ>#mtz.Y iipDw@G+꘽^(Sw7qPC;\gB {Jpr*=*p 0{UwO)*BnY$Rf枼Bc/(̃(" Jĝ{ܗRĭE|X/H;RZY] uXɏN p_뵳Ku7[-v"tۏHDvZz DGdq_qv:HNzw[.\.0Ov[1Vr=-NU-G<4znܷy ?]lk"O= "8ޢߩGn"Nmyzm[[XW[@Zlz&tk #fkh!EEO{, 3I MOBG[#ɖleYDzLp9Evt/"D>tXx;\yƫFZHC##b/(orb:En-_ohY=: b͠]'U6,2D{2X[I$جC7p%!#qn y+KA$t[5t|\NϪ˰ [aNVW-1'%!Mq˷TQ6rXCV$dIЊЅm |PqAJwHMHhD$Ba1"GF( p] jpa?$_?@,ZQǣQ*_*n]r4HmkP~RJた;ZWX.u<E2U<]oƪrKaU>'TPtΚ.Ak s%յ7=& XbXNdk4nfx?4g/YO 2o À+awSg_q ?Tm.}'|FH|{;v D?l~6R99xV\ $;;"Hf'So<美 S뱾;QXnq!FwG4]pQ Q9q1: BhwIZr BaVc1+w~孧v)\*h.Cck))k K)e;vy: yMDؤoƁ$ދ%G&2RܪHXc&Hw~w?ξWwЂҋRX]ǣ Oe(cTAOI!?tj \8%TUp*ܗا>9buԲxXn9#bO.F%GH&)LV(nrDAv<6iO( O keC!8 ^{po‡ޠ0='kXfhy#GXw;ru.&س-߂CA6|seQo(ƣA@0U1eMHBVY4AX0.D-ELc눎+aI e qgՅ5%sMUmU@nde(<g/V.f:i>

 NDs R-p云v?vٛӟR-&4<ni[˓Nt{ѴVeYEw>FAbnVV3x{/`/[)T|4~{g?j+Sw_=vͮ)c3=K-O PT£OÓ 5n}<3t"a䳼Ї,uꗖk7RL.OSz~n0nykj6|G8Ag 2X.‰{u4V5*Z[8qB SY%|濞})4ϝxm}> pla~?zިs)N/%܏Эg+#r˜5Z݌Nt.KbQvFlF%fET# -XDԉ ff&! P'׼)Mmo[C/yIYeacфM@ SL)tp5Jp-m CtrPF!+6W` 븢+6WxOCHAS ̦ TC@CujerEj> ܀"'U0)|duH.Խ5u8um04ΟSc4x"M^zo!X ŸX-?J/SOiIx?v#PP:M8NjO͆mvcd@"fч)a0F=$^T57mCO-6nSM!tGD9JK \k2'K<\輍fv] G%op[tēk!&Hef۹jT.ěnn:_!l$$a0$܄l$AmNl_%mϱlD}AB[(Ce֧ F[q*$jdIVGC Ynviݜ0Z}b};] ,,u: A< ZHQD Ѡ(El,*`2X1Fa;̴ 伴7QJ=!5εA;Cfs:*͈RV9ld +{dӝ|Nga'߳Yq'9󕉊l ', qI؄Yg FccHccNiekoŊom%>N/0ބml ,#cM`f 0 "(+E%xA*J=`uRtAunFo!)ڦ4Lda$,5$"<;M)1),Lɤab:+L ^u~Խk1iy{B6t)&"j{nq/ i1KReX*MQ Ty@]:ٱs0^QCƣZcNscᣏN ^1mPwcy4 f\M-WƛnwY޵5q뿂[N h.=7U!%ɩԉ#e;;3+1@ϟ]\,…l$8ӷf6y}j^|E/+&ЅO&|qРF% r->(,ºHm^_Jd:FUOd<\Ph)Q1#ȊKv{+ǿ0(##5ܯ̡HBzs7ގLWݸ*긏FlU&n9J 2XșCo(%=%\ɡ ^ۯG5q2ܸpn\ 7.\o\(.Ip i+,1,MN.lI$i9qŶj5Go1"4}3yirI-N3 ܂3;g,0N{qOUO`[QF;zutd\l0.Jh֓$ jFo&5 wBf̥a(!JFC6LqLh+q+a r Lӊ÷ֱs;n^TM`>v 3K,Qt$Vhq0&eBK֕"7x4[ˢ:YzZ78|]\2OO]6>6+3BP{/˛HgzX9%l~[RC^_`Xֻ04aT;&]e (M~{) @oL4d~=S^ia I Q͚?Vx /]'3)hME^tE5xIR3,_voq޼Ù{[SG&-dd*Q;ݖ{~10pC2eF/,h)I~,n?q 6Euc[ ƶEю_>ǬP,BF2j5ȑ7 IS4*& 2)'N4#|4.KgO3f--,h%r fKU {J[AVu,cr:R*:5e hU 3o#.ad=#zMLZe0Wiwד茑6miˬ go墂A2e 7 >fZA Ff;_ . 
2xIqaxlA44㞡|./CWIII!Ғ4 фH/{9`$`'5l׌P}I-hɂ݁Tof剳r!PMlZZQ%*eJ H+)K aDm#adćE~R1KUy(8>s,QbĠ+A`TьS.f`\zfJImƴMCJB숙iO=̊ 1 he*$ނfS)0T"OыikJجM1L^iu:Z?Lq`$%H!Y:DɅ1- y-:㸘mnčmA5nQU0S aoբg Qju$P P$h4 f$FtN0AD7P78J( u"П4P7&ŀꆾ`XSm0ET[(.նDR{Ѧ߬݅oO`wCߎ'SroשW;9()UL`a3Tf13ńჄ̤MrJeٌs rYJt3A*K{NR)0&U‰8Ga}؊lI6i{KZ1qTxiM%qQT*xBS;ax QbCl-њf^?pjQ-E  ZJ獴ġ36Haju&%.^cD}8 #̖&q8&!j8u iDN=3AZ 'n!Ieh(LCdaX%)pmv>oINa:uhj#p% X R esКgd,xY*йɌ+TXh9M$~;jyY9af<$ W;x&u!$2Q1*ofB+ e!ɨ )8ķ(g{@j"6j\*Q<^E1Q:e?'3( <6Xp#t*J3n>ecR'02Nq`n(GkGD3 @bH G*M,I2TG4Cd6`.AO)W&D=9mac-@AAJ!1vT4@#* p ў5ඔ{,tcf`km[( mW0T9P^׀X v%l^7oKs0ۙ.۽R,PRNoo.K׏ָ/YE]sm-!,g&RWc?$Y.>?{+DQRZU;H2hM.f.t8˦Ǎǭ[t s/VzCYz8>%!IQ])Xh = '3dvO.e͸E%9{74E^.I'/򥨣DFeyc-տo _b$Sb}_R}ϵP=\Qd^f,qy~;Ѽ׺ Ϛ^mCfsl+y޻|Tf쏯F2jD)MIe@mk vquZ飊򔬞P›>PCߊPM m˜R-VnB׽>NA,mAv'ݶt3%n jw$RW>信0ˉWv,7S e}r(bD x/v+JB58}A 4{hZ~{HhBz`.x~,cPaX>O86cj$;3bDG+'e86ϝBh5Ɲ $.;?;|g䞕 Ȏ}ΐdRut8}#Xip99I01Iꍘf0)jҩPSpM/搶|r:@Hci"|N e༑Ow685Py$U8Y Dn7zu\Ǻ~ؙ#H}#AtܞCHE;ν @q@]}㳯"(z*Q̏7Sz,'Ϩ({AUm Yi8LdLYܔnQ>$:'[Kw  F k\F)#_.s{قuKہ<7[! fZr9.Q' Tw -3:<9"& DQ5V˞-ۈ٣2t̬ᶠݐ4W Fsf`Np-Jo4IsF~.nXqe0Y)#SR[ o,7?8߸y,†{\Y>Y}oڊ.M@o~>nj+aF=ǃKi'eeHϞER+O.j2=W(((UFswȔ !+̓>W6J%w}{&&}r=x-9Mr3UOm2]6vYn[`W7S2az+krF*GE-5p?TsI{{N)4J)ISq#i1ġ ,qz8, )YBuzݎj-R#%,7}s;Zsٻ?;|ݹ |Oap";;uΗaUy[hzUWjE͢pd,U!I@M 5q<1)4IUH1Yk1G8 Z|,'(hʘIo~Vhpȥc?4H$EaDR4·L 6}0J{Ο.un>ML"dL^< >:29FKJ ^6ƔJyL|RJK`#bF8.iLEDjM2'-}l}d9Ek@W;o6'O(FW P5b'\b#'/(9/fh}8xRb' `VVYO'x kxa^_+Kg,O&ՐTĝR]wZ~] 'q"5@Hƽ˔رK]XcP( R8׾PXJuB sFG:`atL.y6Cfx%KབྷCȐs@U& i2F%D\LCvT+n{b<1 ɜI.*#u4A)%4( •&izq3f[T =,n'qM CЫ+y5V ?I<zq%mWc\%wj))\rd}* 3 X^y>9_04>L H sәOw7x%C^\t>L^x jaƩ:*ZwP2vV&qَ`& ōJ3!KɷN B/ Ƕተsa>` >@/<#gc^ľHϟ @[Db=mֲ_)ƤD @|զ @vz'eYG/vsU~>xuvPX%eAzѺ{!zL GU-L2ש{}8rC{1箴bX-˓?HF0Fer;#ʥÑ7<9DN:bN7zM<0{ "bD2FCW0+#S5}.~>G7݂iARZ@W~Cv,kL%h i h^U+zi4Wh4:L7iA| ,L y|,n0 ?`Ku dPTAh)\\ƥo*%'h(1eR |*3m1X▣ٮe̡c7.sqT|i\ P)9dLWfRb 2@Ys3pfW^Vq=t Q,YFHVR:G:JD6ДљKI)1ְdL2(ZI@=H j]E(o{\Oh$!4:MOUJDl ь`DI4a'#c So Z_S:F'(0MH1i9' CS]b-yiNrZ+0!6$=(zNYW.KiQ4Uݽ|'#qs3yz[\>,Mfg|x8W;kNT.$y={Sz`7j{<:klh<.?I* Bu)*JS~HO,>NrN9;\A 
.U<=u\^CKed4T* Gꀕ+h uY0Iaɉ1V1ҥG)'̼qGfcIqj̫Xhpjj> (>|5trh6^+Fo0.}?LppʎVL{OIp7/Xѷ60a,20x"X ya,HVLX* KD^Z 0YJ X5e/ $&\r?+BWRw4da-tX0A?7須T%$O4 YeHc0`CN5%e$\$J28Gj1R╱AI,;LJE&M W)5;3 iyJ`ډd[kUӬCyȤ ZZ(MBK>&)d )vF WLqW 4ƺHh!DrKUjSDs@#`LH6AkN`z30*EpF[Q۩6\g.Y%v 5<|?23(mŮg@W`$~0GUy3w&u`By#?´nwW?62q#l5M7/{eaH<Kf?q/,0p2EЪ.b UH廇aCE"5<IiD&q̟^`!Eopu ̎P{ l㬙v;JE lu3`>/A,ٝ5x&CӋK!zoEVަͯ s cC]D(U%ePg&kVcG*qmE0qïƊ|k5.dd{-,"줭1Z|䢾Ц0וE)AMhjJzG.p6#BUն 7"}u0/L5K\ T N%\"**[/|0DBnZ"RISLr)$ՔMhfN`ebp 2 e*т"̓,c7wDK_\,07 !  a<~C$k3 qԲ~I Dx)f3icaQrI"e0>VYLX̒'e7?3/vaX22Bzhd7lZ~ ^m`eBʶi ??p\+~qg87{3d;1={/^4ޗqhAD,ÚKJ J%1%iZ]L"eϻ[vz:WKvtC~. /hJ~M^Z Lry:BJ6`ĈeQ(A)e+er:+}1c̥|]5(n=s6/;`ەj6_'<7 ֜k@iDU߬}O`9ޟ)zfkkH+Zo{)RAU J$RRTY3dp 3q皐if" mؕԐ6KD`rMS9EO5*dmD Hq&8ϡTd#RaII/5RnKE{]1SCi?Ѣi4!Zm?>R SeHqp*ig%bp*a#U ub,MnnOX9 qZt샪+P [{XQaa&nƞTG'E`L-ܪXWď=xa·ix4o,=\4x 'w`0 oon0]Z<8ӟ<ҹҞ9 'P~_ ػY8ꀬR1d #pwe9x XȕgU s²)c$:$0[˕)?s81Lŋ-&ÎbaP!i!#yt匊\\Y +a\%5B̳ 0tAPV,C%Y$ҸLr(b%m#Kl턯 ߛnDV#=T* 3XKTaa,7.Cit$mOS m$%pĽ-}|,qkUb8L3}0Ms$5"RƼKJUKjciSHJH $uRL=|՞[g%~qNrY^> vtqb!SDž/X.[;BȨ` c4RG̑oĝ-}1'wr&~."QdgYeJ薝WW;>j`H80Fש 3n0:U"5J%`%%Nh̍0exK,ڵi\ 0UWb"fgu_X'ԡn_wA'c^jDgYS][o7+X9X`w@UVVq?ŹH=7 9i p_*kVzqZXG@cy/r0ˋ~C4]@/3^I6,5xgI|TE`Z8" }5F >k&!T #Xđa Bc@JhKm3ϔCȤԝ3u >r@޻ Ioc2JnOfZA#Ww "-=e pc&FIZ2~ 6v1Zf,[LoBR>.3OC%xir}ogա30|s $ÒеU154{bBP`|TD4Dx5! plq*4{'Zc.]z2,CfOP.*@@]wϮ=N_j'~_ݻ):󴁅MpC0hQ Ɉ 2 IO򠄖Lha2 ͰZ1E9s~?(KǔNiX-)|2 n7$H5OїO 4ߤGG?^~H,n:> 'x9-Yz͛gFi~L]+~f}plI-5d>\"!|]C8ny(%Lj7꜈LS1JiZbcQȡw rj ^g\X@EV(,YAGZ*eQ k*HH=2cNy*4]8{7W!ȁR)h<+)pDZ RHnxgHZEuP6P|5!3DJPTbϝF5QIP`j -0Z-_VlT'.Jkn*ΦԧPA'@/fsf=վS&Q ´H`8J X AQn`>]1\X ]~LmjHiuI]{7?wt&ݨ]3 %4^m JCN4pP51,* D?1`x,<!0 @5>[=[* gpr3:t>D Lcf#^a aG:Rdf+U͡<0OtIu ϰ8c0itV=vB} )n_LhXK޻XmDZ %֧{)u:mWZ3M:MJR"%K8ƸBmˑqiDwCĄ˪dhFS-DXK5RPR8j,Y9 [ x@DP ,<xUB**n=0ÒDQ͟K-Y#bGG ";9Tddr&!joFvvv.U [?Wcj]L7f}C2p_ϗӻФ̌DY6`kQV`zpP% Mʦ[q*;͎}? [8 ת*kQ2 8M͝۰/D))5t&3La< r; C*"ORicJL'$'5X  ?odWpZNu"e;#VO&rtZOE- &vL'+o6Q'jJ($bTb~0g}v!Ax`}a~^poVQ*3alxEFE|^އW~w?QN//<֯ܛFQcd.сjD&=0񷨢lӂ*(L. 
cr8Tΰ:GЕMQζhAa31nfIGd`Z nw, -G/kJ1;'LgCZ$웫` EÑ,^8n7'>ɫKjWYy/>\;RpJEޔXpHsJt?#`o7*zW9Upk*`L| ǔ31ϠO,nl?~apLg-+dZS6{Q2`-Ҵ[8w$L#gUi{O*GzOˎw<=(E5Oa ,~y6\X%h./r^[:Ҕ\ݽ/fFS!JzT2}[[&WɚvyNd7]fE'_&48W-KǢEeԕ b8ȳE*S3u+IqP Hant GvUՌ_KkKwk믰/Y+'kY+)+f "Upf{  N=%9h@׼Fvڻm{@FT猈W\6Ƃ#1qiTt*"TQYqc|{Ҥ9M?gDnf6;u>^|#.DGw?X|Sby3xXn.1Q9dB!D`1(E\8+џGgoFB0}т"!F6WZP ޽0L=^w:n{Z]B"(;"$>u ^ љS$=%}+'8P`(A881E9C9҆yCA`V@6JdMF NjuXjK,2Xbx+Ȁ-7XFAd ~R L&I3S0,)bR"QD:ŅFGlTHӘ8P*akPTV@D*E ۫5ӡiۦDc}5 (Q :W(QQҵVDIgrVle\VOƚiX x qEE TTX:2EZvhS]ͫ-Vҍڢ.j²DMՖJW`npz۬0u:հ>y{}2jeU X앿B|㥑z?_fEB)|&j?6BsSA5θh0eGIq+M[غ3-PGdgzx D2%E" g'֗S%es6GyZQ{\ >DszVɺlrlۙ(r$$NjhKDeX~ShKD+wAzL/aX9GoWE G+{^M2H4G`oSHn~n:8FLp)D'yU+7Eۈ\R6KDg 6D! ]UԠwK"d圊nܴѩJe4TXffᨱ&D&y^x@Gފ ˏ{VzlVg|=56nR XX9"dSj/3)2Ze%Qy멵aF|0H5 QQso񮄊 c%ґf,0 63n,HΰU)OĄn4nnGL./3}=F'<,e5" $5LsI5ͣ?V˦K3Wr^طNI =(Jӊn'hfoc"Y?OFqF yZ&)};L>\%aX^^"bx 8A}x9ZHa ~a1tף䪹@+n\- 8}idN3,aU( ΍[iV9=F3OB57'h4]i4EqB 2`>JIaՄ5'(uGGܜ"TX 5M5sGuk+t`j<8djg~i]A' {K8Bf^Rfyg!fT/fkW|N/٬ƽ#Pf 'q޺wenC| `}oĤׅĤuԒ4/A Dp(0o&!kd(d%,_9-H]T޵5q+Kj+;2@(Or>)R)GGլXLv_}|eϳ(% +}2$J2P'G۾|=R-Bv40SBLڽvJX}ONĆGgfhq ;$iբp׺w:(#p;$*Ad(몜ն-_>AQglw`Do*.&2HssO>`OvM3Ӿlu/pd X D%5涐ͫ}/t>}ʏ.l8z%h1z8Zs%2뇡m |Cdq"7qoaiyso!?}t "jG eHv?ZXؾ88$.4$h9^\ȾhL_Ḅlnjd]X;zTe=`8eHT3t, 1Y#Z1#G_){=t鼄wzp'5_1!>rM wTf/?@be;X wH R~m nj0DH^:f\Os+"3W%BJi QCEoP7_Q"oӿ)+i[RZMtv8ugW>#Rt63;s;n> z1]y7E*KgT }Ԫdo#[p=4[=-䀥9|sVOv1ʩ(72h ‡'駨ZtkXv{-P0޿lniSFܗ~-|<x {_N%N@ӣY+8}!{ŢvH}WLd٪ !6UĄA?A)qEx#)9xY)$SRqN؇Ҧ|J+ż,|Yen*GǴ1=Braк` E##M4(Vq -OLA݄xxF S hBh3fBȬ+T 0TyEN 8%KQadX=z,-p'nTHZz4Y6/WeO?&5ʼnBkƯ(l19EO1[dx ;s/A[-'EA?W+t$D5P tƁ)i%ljDŒE(Hq҃P(Lk䚫r%ut//,Em`+_OӛI_UՍWKɰz@;`ԙ7Z"t溠-J JR7?:_49É6+C{ vs7|ݜ|Ua+yU@OVeeCc硱y%. 
ɜ vb5_/1qG5(۳yUqeijӹ!d:hk܁2@B]VJ̉6$ 8hGal,.#Dwi\#$sL/ʢ,Q׹VJ DأRkS`Zp?\aNVw!u|>آHUpUZM.űj*fW.1,3v hŰ^ťa*#,[iG?=lT_.q~훒jߞ\a Jt.l+>FN$@qy 6dr5*kyThާ)q.>yMKWn{\J j/T=F^MBP5SIK)j@FЂ0eT5f`8Up,q5uA")&h#6h5qĒum֗JRhk=|-8gbmw3h%A~[[#8JU,9U5gQqgR˨ͨwEm*riu( hCj1Lah6B*$WgתCp&Q]hnˣ3En$ɍj9\ Y5 + s@̶%^|Ίt)JRvKًp]]N`bl[+1_etkOpi}@ȉHX)#@|摰߫H2A޺w:d'yF[tUfv]i֫!PS=[TZQn7Zis1|79*~2UJߪ QXElq}]CԒ?W?Uny{+&9.IRPZQ"j%QrHFo_\s*mT]ȍ)lL \~0V+/&ʍ=u*ǯ/Fyn'7'!ܽDն. :yN_\^/MS拘7XݪV :.:o\Old5m=|YS! 8 WLؒgwe)%'6l*o%>AK3$x&(NF_>r$z.C.)DhLS\JGմ  .QoZ Ը9r!NA*d PC7?_ QwlfbGj/@ jKyʁPfTŜYAjCQf])˵'>#yJؖ@ ~]`)9%D+c _(!5Y 0J1yH:F"-2`BK2UjE[2$0\0:#>o@zN =4/ yqG7,\S斖,-)h8=]EKtzSe&uB&mz9yW iUk?̫6'Ʌk~@.a0Q^q-JZ1-Ō'hp|Q:gNZŷe.5!ۜ2F\] ]F5~1ף#pMZg(~m@l;C(_axm((*4dzIy+s)M"Uu{g\}N/˳/ƢX_ ՝ѧhziRI]ƭA) i W}>Z6oi벡DZp;Wjǔ^W;6)q5D+>q$Å8Rq!z<$`jGI(Li^csE</ [JkB%>%tf׍Tw8N<}5O}cQ!)]ML~z 5T^*_fg. :sp Q@E.hr&JmcnOx( E (q y4TvLHYb0WR+:D*v,:.7=b Ƙ$ =Bȩyw5PٝGޟF,UO#"u!AWGK)=!裥Ӄ]Lo^n]"߰xEZI&zEEV79c8k3d68) %`Y+S4C{Uy֫/klݗtBMW)HA(=Q- ܀R.2 p!EfWg_߀&+2_w lC:U;= ѩ@+ZJ-D(-IVB$qSҔQuNnԘZpQIӞvB SƚP&Lv\};geU vE/${Y~l O(5Gu`f͇KY"tds$7+_W?%Elr9 Fm;_Q_nIg3#]U"kHI䥅3n2AgE!+cBR>p" /~.mGO6nNm^l#Pad݀ݔ-8΅/{XrqYKyК6)DĈ Xm"~O/w?5IL} B[)V$pSvW"#J97E[\r~滲G'7OƳ:x=5]ؕ^9QYgrN Dd X Isg,\u!_#C{CCjЫH'}S&>zVgr؝rBȟ7pLutR^FIJ!`R|_g;W+iIUy\І~ e9( &;M'5]cFhX18Jjʊс?Fz=Pkpp 1 nL9pꕂcpX쵛F`B j ޟ5mkP|rgߣ547x5+SlDe5'C#ߌ0%ͥa]^+ 6SRyF 7a4 4cv 6v젶̨ǣyRZeŽCK~Md%R9C3@#º2,5_ rk)4[\஗XH kn22SY-pNNdI"(<i%NLZEbzef ~>x%a{}v!rY"e!rY"݅nA8:#I"g2;N Ŋ9L!Sgs'ƃ.K[}[ ܇9-9vsaY*aY*݊.#:g,Jz4 rC4/cRΫ4ˠ\5[ևoi`rلZu=o[pe9l=K`dD {04ZCz'cO^&p냚 DBmN >7ٲf-K`fAy>o@C! 
(sF+6 qEYܙ\an}PӠk}֔&gAz +o+ږmY*ږm٭h{hƜΚ\Z&C?S)2['L|`r9QIaqZ.S#(&o EͷYe$0"Z+)GVDn5'5P=>=h\AunBRt[5#98kG5NMawet<܇fH GPb%x*Fn_p4|UNFi0\XfRQJP^@Q+$]+@О i@KvٳēT+0y؍Ef4?w{-Õ{w\ŷ/$x+\-/<s_}uWlu+-~N~̭/kuZVw79]_*IݷO޾K)rig`:UcMoq칩`~n^mM$bQwDzdyCQl6PC)% )kP'ET4vg/!yPX$/5\ !5> OZǞ* }@@f_~I$Ғn/x9L [&x5p&tcMcL"9 HkyF7I4~7H])bcݢsO%#lŴ4SGI֐Da^rxAY1 Bn}752w#+#Ol_nQH Y,wH_nPҜ/:IW߾5@:$wI3p#ʗ &-aޜsnb V4ASPRbU<PّƩ|2J !Ĩw:F pbXFsƂnKqhYqӱ1L`hEx0M4dSDDgu$ cJ= òe,FJiG`}dAx MИ-XF^پoݛأXwV2Y.p߶(!CL}"5'G!`Jʺ}⌾( j8UKӼ1q4;u}QhЬ› bI <*,)HQTUqI%0 p(:3vuM"y[Z!S>RIl"G<@SIYhU Zu8XoLY>N2)B兡/!'dQМ` e4Dx Q)uIsahi.8Ů,.PKR*ņ9KO@8ٿ9sK-Ssì1S#@0@ 4J}XM^Fs3nC.e[whgMv2$ɽ\ )CuGm~P =4'g!К᳝ q_ݧ5ȏ/~µ<5uE䧳oV'7*˝}+TQϧ­*˼ܨUh+UhOKt#VsƠ]b<;TF֨v>nxUk-l;Ш5=ٖ0=rߝW].=*o_6 O ivi aN)?u=-6D;ϖPl.1۵hd}gt0"|o|kzu6ܼb ÄQM wh};7:\-5w`]wy#֌y] Af7\eBvs;ُ\=5O؍whdbOh7y.<gi\lELMMIʔ <؉Fŷc+7=vc'0yӸh9̾koTY-FUOy Z3VToWgWfV5}ZWrmA}>3˞Kg=2mf RN'NwںnLWd|n&EEeֺ)a;E[WL {)ښ5gϤ"LZI3)8z4:1:8$4Ӈxh7"pe, C]']^K~:/_3·kVL˕a`(aR4]˱qIi-q>pI4ID@B9+'NJ-2XS Ec'%|۵e8G%ӟKz2ra,#vs524cy#ZCLTU )2GMS&OSʱ41n2/Dh/ehSA?P8€Əp>w&`$Ƕorݝx>lHs([ukO91'f,gB2>\IݠlUAF2cIK%y:dΤv^@sn"h&%rmrL)updyȐ1YM{D:p0Pf &Mf) e/ȃ8%J6sk+`2 B=V:7fAv"Ԃ\DkeQ0%iՂ'D, 9A@s˺ 5lRvM/g> @vAPqMTq\a͕ c6~~R*w>E!B1ZMϞAJG-2F9XIa~Dy`ݬaoM5B X+?Vwzw5h~"DjZVg/jH}dpd/.^zkoNW+e֧UHw.<1'eß,eY`ZeY`ZvoNˋ}sTz^Q+ŲMo]Io~KQM.KtY˒4]v B>P}a@4hp19]-#y@ssL1lע( y`a&-;4)8)V:vJCe@!3tvm 0u.$#$7ɻ|3*: 0q[#_*×÷Cmw;;E Qbƭci~Iqd[%z2MƑ<<<|xĭrN1H8vm>`gb8v)VwA+.o;pe):X/O_mٳQfF~xt5C=%ىL$(qO-tMN>C8d]U^ո[jWіzȰ7>}LllXޔ}>)*y]vr3Ոd#EOJ^ ]HTKMɡ1F2GQKMɡHOJt\T-?VCWFၮwZP^y4ۂ=Blu;(:5g&UN ˘ br{!,{rZ 3kt+:Pm]XKU݊ +ms$srۺ`+:)Zf:%:tD֧MS3sxr F&W^-UW?`9j0pmZhG2S>|sJ9~GZ+d4Ge$N `;OPC{Ow6ӝM_;#6G~=syjɐu\dp0ǗaBD52p 4Lէx'K0/Eӥ8u 姇y|\3C?d5ןWq,!? Nm0k\a31P2/;R6_6Lau鐵7/'X~Fi7pdbo]u=l\?Adz@X[pQIGʩ##Xp0VcBO=Sߢ; [Q-ؘ}hzطَ syhW~ݘ"cx:i_TD._=|X6SѓcabäH2FWD1ʤ`gfJD$@'`XCL)XNJAq'T%4!FHm'I4Q! 
C0C{ƤF +fs R[HZ &P &הH*-| &ar1$ʉŠt JYvǯ9& S +='6904px34|\@c*Q %paJ6 Pv +UL"e*Ƶ߸59J迟;wkׇ?#3۱V7WoVwsQ')h&FL %n0!e`bJ`1XL.vgw\}륨‘BK BHupG5ҤDq7ѥ~1_' 1t'Rְg{6l4ۯ8^=Lm4<>ycuYcǩny}͸4/J@HGNCoTvնiaTi!)OFܥ"Y EG]U:QfΙgl2 Dނ݈) d4TKlʬc#"QJD> %F6KlV[6plV Ez]'*HqU~y_%Zel g>PۺSqw=P!6\)-{d1㬜;cCP5U vFF*3o˜SD]ON)v6Q* +R $k#;*AjT'; 9QƊHtdfD6^I8?2ZTL|HF~wu=n:ݵףǻ39\ho~|F E8s/qM7kgxڴ3Ob1Av>8k;o<*0yb-Z&oypy$mύ~o $)NS4:ȋiVWڳh:eܭ<'Oc|~2k9Լz:3k!W4" &@HB qmEQ]]Nb `PI ub":P!$1A-@z95cM7wf\FԞaev|X Ф>n%-I) $8$1tDCF(O?NbKOB S 6KıhKW]qSP|;㲿y/W+`ھp{y1?jyVWʞo5E,S԰f)L/ŰV!)%R }:=Մ8 b@ǜrj2B @ٵڎ'4(qaT6sZ4:rEȤ+'rՃ^]c\F-꼨].^']/aϕ1^餬cek[x,h󵜪m76v*˪7\ַy>}zٜCK*Wr]JDnBjj' (#o WH2NͮƜper]>Cld?n~-3ٵ+ڝm|m*Hg z7$t.fUx896cq_ mpIwʹqB~L a\› 'rxPw dzuGmDG"!&& 8JwLH#)RXkB"DhBI&4 @=(|iSKYQvrTn]Qd֓>Y \)}Og(ZJx)H+Bz"m[lXz2=f>vA?&9,齝kC) |RbS>8B" o~`u[Te52jTųY)_O[!ijisP So!փf+K\1]8MscF E%uX6V/~M;yHov:uT:)5sRWm/Bœ*F+%T WJ0\ka@g 1{o:aTy@JaD*%GԘ $ܥ~&M#`-Ltbve&w]`ϻXyxnf[XWdvб $& a&I`q(9&Svm"?Y:ߣdys쟰\Bc;;ZTU>k2 hCD>}ĦV!ue_P^"ۡfZ'5WAjpy#Nq8=}$x\^ dTP^Ql}7\8j #ص7 Pq&8,yש4wWkuqVg'U?DQ1bB]BRK~AQ)ܳE |>CH:K+;!J!Ok l@mAޕ:_V])V $$.r_ ,$ 姇Sd|q7ÒޒH!Yh8LreDCB|OJع%29psN2}=d /!p6!nSsȧX~Ocyud+aۺӨ6W0u]yw) )#'Xi:\Ӿ9|4 4r>n:*IRu|+ج딫XiM/w uy|+CK(P$=H*}͚Q8vm[O+YkdbZ#Q C$Cc3Jr/LUgg9hmiy@P _*3D( 2yh.~2<żk:yJ _D= qx T{b2ɐBxG)w}t ̫ftl{׍j(o8>Eb˯Qd~[qW[(<ŅWKbga_\&q/iقu%oJ}%&&N\ _'#d&8{=dV ?'?#T|2fP=WUP&5~5R*Q~ۭ\l=HAtk )(m(uuXurX] +c<#}$$RE&P ACP+yiW怓Π_乹{wy3@x-&KO2*qۢ AJnYsDT$;_eoD.\v|y9w[=C+33@ukO8O%(% :S",ܜ4(N:aG=pBQW!&睫&w! 
lqS_|rOS%ʦ:dsEYeNi99"ŇmӰJXr~˷NԶ\R{C|R}*k(8l](n}W;t#z{K)%´f7[( NƓٽt }(ƼnꖭL\R!c_eqRm]X^ QG2K$ \ppWu0K Za Á[D"bdNTl_,7 S|'ܙvpE-w~n^ځEg4D!v˦TǙ 4hr R@b쑽NT4Xo8%<&DH!]uhOK2Éwto2Asg{^}Uۿ~pGn훴:c)=_RmeR(qfcH 3/Z79P07e43靨6$SO`܍l0a60Lb O>  Fގ!wakgO>^t8?;Z0>L7ه߇4eF0e|/ L^RR͚Zp-x`qc ÊQzisH5#=a#d_+UBiVu/8۩Z&8իZֽYlt(ip,a55ZE 醗)UlͳwOBoӀN@+v9^|j _FS.bkfh"6^Ms)u ð7`FQ ndH(S4R1C)&#xRFJ/tGzwy* }z=ۿ-,U،'C BѷBmBVWn|*ش]XZֿ=Sf K']=.>ZV-ec:Bwpt b%FtwP EM po}Bx>JPUsۏq坥0^1C~lsI?j۬%Ft3|/T@bi2/:vwh/ʱJf *[ӧܝQ͉V3Z?rᲽs:LMx֌?GV +gI80"±dAplK".u+ /QȽp(M;$A_]hԽm{ 9C_-#u/  ٠-a㙌Mpfy3Qk+<Q.f'J"xE]$ܼR[> x iLԤ}uM1洙q .ghwƣhk6>vu @LfA s}ՔWq!ēX MYSD= &lX7~c/DLZfh mgӰb%>c 5 '3oae"=b4،3 uK|ZlD3K6{a-'P?뎪<9etm\HοP&9UKZ4yYn4V>u> 3oܘ=>oc"RQ0'z'USJyT5CuY]F?Hg/_ւ;EX;б:@ SV7WV , ^!,55Č\a3 1E !ZD*܁U9@ad˧:8 NSZD&")@a٢3yœ$*R+`j _8#{QvU!OPYcWyP-( 6"CSαhTA0KF50_yG'RuUUS->=~Y e)ˊ_bT 5٭#^NA3odr`d^3a$Qp.wFF?l1|ϸ,8fVb % "RbTGK *P*R5G &0iSU2$1"2*[o1U'ikT Hi+>pR^tP Ec[ŕB2Cnyn6ܩD2&͔A*(yH \@Ccc aCL@1s̲7c TM"RX 3[~^o.+8z }&+ɠ/DH8S~E4pn9Br}Aa)4#QE3G e8)$,i)\ih &^QBΕ^RCdf;S7H}+ULY7>w gWd:Y s4( kvQWద~hMd/*Z@ۯڭ3]y;[YUxOόh3ˤ߇듀d@?0}8 HP{yMK =9NaYai ZS` G5HK ӭ %F[ZC4VB`y9)KL#LDVMN9f$|朗N朗Nq8St8W%69BD( %SL) 2 E v!&DKTVmLewc)>߸)rbA5륻j ;I=|/+V֥3 K^]C]rXco[TIucn]G B:x7ft~ kGM<:n!l8~\r9[GYXlx}* 9t/~ fx3_.OPy3ojt-]f\2Ż?\~X%Emor?_~MoWV`TkXӌV+\l.'j%O"A;;Yv@B##i森dxbK|<ͻ9[0-q.::.R Tƒl'Mt>L 'I1']hӃN 9hQp NF_c񟒝]9$#lXspHo[YXc^,BLI68z &M[.ltoi5j13uep9k{ N^ϿRXg ixԁ맓 ׊ۛ C;MٍO+~i pxݾmt?ە'STl_~ .Ӳ14? 
i;M}] @/|k ݙUޢ*R3"Ox๑ZSRmeR(qfYJC" @V)90qĨb鴀E8tH%s c5(Q䎂NY_;m)6u ] :O }Ag(O0H/Q`G8OEḟSQT=BFk\˿~|%)4JKY"Z{wVP)9NKf`( !gEt-T+sL.I‰8GUN5tP, 5Cli@ c+QC"- 7EA7f @x:62X9d<E !#2P;a%$)B8 $PbgbjSXj<̡w?CNahNQx2no2ѽXu`Kw.6vL@XrDc[;qCC<)R+& ́KyŠ!%%_Li|#LSk'ҩ CHG5)'1 ǕLO)2&0®?Us8&Jk1Řb ץfT-5̜9˲Kn E^0vO(*]("ho)X 5YVLY>xؕ2bmSQNrL2&ݬrA@ɋIʺUrFV*3&9H('q#;/1KW&MMZYa},w#dk!'@fum1UWiL\<*Z[$"ez; Bc=͌Zxs=".MlbLÕo/,$KY]oLGDs9Ź)csj9}͜j+i];דQR\UL&}s)3u#mjg]uԎ&m+3IV?­P;  c|cKxyܲ!T| E|U{ED3S) ڡeOuJp4%gJ*Dz4r2U6mUShULVTyMi?vF\dg}* ~y|N rq-Iȕԣ7K<$TB) vO/aaCR#=C8Gix0F;#mz e~ R9} ;r9NYn[eQ)ܻ( ى4n%7RvIDPڐ մx|z}}T*7NUbͳT]NL5bl<)\RAo.r@p}&VVO-%\| ;5 ~bULdEzAPʳOyC9[Ûk~7CB;(WSa4Ѱ͞>{bP" dtgw%~?e쳵5t,(m-c D^x|": ӳEj3>Xr<-Ǔ.1K+!U WO@ L`!pֵIG !`**$equ!ꔪO>ؾqsLe xCTڱm';لPJX\$P0׸_֍APU5Cfl[ka D3^rlCO XƂL "Eh: /.q\&5FEcb>]=-n &;8o`y1N9?hf],$@H- )g?Lbs 57Aҋ=om\w<n A/$cBmbǴ];yw! ~/h69~0Gs970fDRj!&Z%EFyAQ3xD>ζ1v4[,zS4囹~; fS o~#pR\u?8Dutz_2M_/ $qb8' ,3~csr]x|Ac3w0tw=uu% ƻ\cnW.p/`^W+ʰx܄ wڢ LW讯L|,A;>_Ox|u)'^˕g|?e#Fٯwq`ÀdqC/8&Azozģ}Obm:|L{3:|LwAsمv\޹ipdnb#ٷG`RRSyIn?E0:M |fR<;}ASM@M΍ۣI?Z<̻z,;~> yhyv Q28iL鼉y1Ɲ]*輱ǣBZmjahkww4n`w ǫw$&FC'G>^Li}?ɎӠo.lrYfA%pq|ycQw;YW7{X8{JBdKв'?{Q!?A3 dqVIJk\ =sjK˪$Y:xbF PoG>0So?SŸwM'q7v>$=x78>x-~> =U ɠp_p`AûQ|yGk&z _Y<4:s.'NhQ>mz|u ye<M lqSz1$Ǐq=k>m1mwd:c zEos^aH%Q\|8`*YSBL %Tg4K!7"Q4uNYK0_*p]2z!c|wBy"PdE ;'>u~~eM择܂y\fΕBR%ټͅ>K{=-/Ok^MhЫ̾}<R(j0jiꮘW>)7 MC찐(`#cj< z)MeR5u&lJ- %/ɰsg΂xIs{Tgz l핵 ,],[deCx01qjakE6?޿ah#%2Ce-7#%A`#aa@A# /qttw|vquMOgE$έSpRn1ؾ)fá74LKd6e(Q0mbn^]]\kBu==1pܔ2_v(1fj,X*{RH6,gɫ;!ܕڽJ巜nPDA;@Nr BD6l4b,no2$hȵ rһ v4yp8# Fri&lD1Q *ڝ5mEw/Y7dh NA\Cx'k>(+.QQ]scEi,ϝxSū+q7*w9pøe /Σ`Cƪ[ڙ&kgz2JJVmFn%>UcNKJJg>Dub1mJJyXH1mGc%:Vn%/{I'35/$MV樓xTjI.0\\T\a]GpHT&uV*B-ua.c4U縵Jc\yU*VyH(YmtE)ެgվ4&HJG ܤʣLq ܄pp2Q9\OFŘh9GgsAk>d:m ^!)QBEmx;FeZ?S`.KjA:ǽJ`FJXe9VmUiQ)&ےe=UATrծJ1E}@>J5vRp$9.o}W|Wf5Ǻ\봔QJ_BSAq#Eqە#yMǪKʏ=&2+Z^STkDI+chi0X4'g d`;_޷NIЃ(T=>0!R"^0,bMmPV@0F#aWTz6k8"ESr1D}Ccp;T[86[=P!pBJhI+/aaCR#=C8GÅl\<P*%vcdSi5Z rGq%%He Tzņ0 747 Ply8&rC KBZH`M1xi XxL#|Ca$U H 4Cp+k\ν[S'HL 
uzcݎWtXuhu!/=JlāuM>uŠ:mnǫ"Ru:mXںuwպN.N9|eCfނuŠ:mnǫ6Tu:HYںuwպN.N|:u߄uĠ:mnGK:XܺvٺN.NMN^/кi3^ (vTS[oG yS/Ophݬ܄֋A uzcݎWrwhu!/]/cr`ݜ0֋A uzcݎWԮV6C^8E8拹Z7S^ Zv=U޾[wG[ yr*+B:nl""@6F*Uթ]oe\)ũۊ5&VP1uX֭fݺ;j` h/usBYRLNAj'"eJ ̮8N%L)KP&CD?J ZthSCIAML0ʮ%i>;LEM^" .,0u}uӫOCW|/URPkdn>ֱbMMi"` #Yk? $IeRQzLh>'Tgdހ|'F~s'PosZI4(–dl£YlB=-^ovMLd (>MzD54`-KtRBGU*"k({d.^PC->=; =_,+novֽdR1&Vu+94&N.N 7mu8n &bT9QŭNzEv;\zp~))"4fH/=^wIv !a m>?(3z#Ag`l}R;V T):k È1:|tքSAg[D$l(6킱_țsδ7H[\*^XVmi5`i)mwMt6J=Ag\gd#nJ(T(:M ή^]1WA39( :BcF꘠Xҵ(f-ZuPĥZԱXp(Ŏe1qGR  -g :k3}H+ Q6J.a1J7[C栴HYd-$WIRAg@pf0|._{$axJ,=CG%)k++鈌|hKD'VG&@v?1R s;PPy1 %HIcBZ铁]Jt P0]j2 -+5ͱ8}{ 56aыAQyoc9g{Nzh9))-a߁u6Qv@6FŪϧA b|;6C^8E'sF^6SZHQ絀mOu t;+ fy(Y{#"yz2=Y||ߙS>zZ '\g^\\sc)TCru_]]~|s6'ߟ +kKξ~wu`Se'#cw9e)s~._tl\^{{sjߜ0\?Sfj4y%Z;3&2|#w嵉g͜.khE}I'~$dV埄ب ss2Pb|,S1쌕.&N:ixJ/sD0}GC>UCPMP7ʛWCj&J͋b/>H gId :p 1j`:hĪj:1b,H S2 jAL%)beL& 2kbsIk9%J Hf{^뒍Z) SWs,A'0יu "1;m [d u-6M`ZG6(@aVpSb^Edꥊ_\9NjfT%? 6YIZm.GVh kBSji.xѤ!QXJK>" rJQEfV{nFf)#/JFPCt/9SICR<8ѣI|:f#T9}SV2WT>Q%LU:&MFlY%ZQ44Uꘄ66| <(Hҧi-ѳΌB{& %uڰ5R#< dBhcJy2fjkJ$1!2R }эa! '>-R\pg>$8D$II*ql֍`F _(Qs er>Z t2^nt12ZEdE EK<HS)~(''>c1lH&vAD"uP*EG·jCóCjDkknFeO@*=,VdW `$P$3~C]F(& 3NTR2EϠkthBTaWYJΕF)cRq8Q\ <;aRC50O:,Q .5 8 $,< &5\@S%;cx)/!9W MM;_2 \UA)*1`*%# eCFEt ÔTelmq2Ā6LZYpbs؇0ۨ>nAw8qSu r7u𗶹<ЅԳ~?#q[_ٹJ8wZ~Y_^u$ܽ+?s<W$'Um֥|7)j5_y3iz,'PD%EZ8v8x 8'hwE[Ms7as=`tec[ S9-02 SژC TLsm,Y $+rK 7P)߱7B%p\ zYCii!D&`x@/ i{s^2**uv F45mj% ɼG[IRl1*'0.QO rărH[ $["2&9kÎ*@DCswS #Hԝ,KI"š%P2#ZbDXeR 5> &9{ǖE"Ը",A "1VIIKĥȔA [:θrtvBr4,:W%wAU}X.iL< c 5[Q*@Uq]Ro/GB/@^Ls4_e0x-5 R+( AȮDV r3Nr]H N}${,7Fs 8N"c Q3FJ-QVY/Ӓ "@دL܈t`qPzc/6\b!z /aqڸD@կCY4Xx q*=L:K.^~rhV 7 u\z QYmO%20Ÿ-<*X+}Uhl{eb. 
N KF*fyhb OEݤF琦9e.OF).He] 7;' +M+$xP[2 ] mR8⻒kj} 1F$L4(D4 8Y>7ۋȑ/APDP Q ,E k 7PQ[O* wt)zz4\+K`W|H%k6VenlM]3Fe$|ZNc7mom:8t4%Y,sC$I)bY/>]נ$|4$MӧJN:9,َkDif:k%+~\+;SQ-8l(q='ŏ!YDꦰ<|,ﶒ—1 [hHZicQs.F욊e(To#X,ϿդX-ֵi L879xB0) cn: {Z70;O'zZ][iZb  ԇG&i3 woo!Z Un=MXLj\#?OuCZqT kr3KO$G~ŻI~_˃j=l|v@hffe_<8҃saBDb:MA;ln Կb43" <s흺_.dqmG'oBvY@@Acy5{1XmĢ2xr|B$vٵ?uYsA?7/-E5OWqi(`Mjͦu@R_3N YѲ3J"KnQRy &PE0L>M p hx V Ě[ <Ķ6O8ɋeoU<6!HWW]l_ls*RFV %sνePP9Ϥ;K`@RnCrB4B t--6QK!qYEOT0 tS3[:;ꐞiGȾ,Y}^*x6vT ;/;|P*+ê$ ]`h'V*b2[cF2٠yB7umքsmk-v"~V᯴JyO9'죽ׯ_OD;vl ! /UQ]-&Bm//'JUsl6_vpǺ WIB&̮ܸ?x=x0z:B4G'F?sxVy"1H "m.}Tl&[Qi[ƱܘZdyj2.7W]jz W9wU![-vM7oE9diE;vs:tCX7us1,8=%T``gɋOhܲx6!LC^N+Pg)JVδ{ne{݊jiStVg[^4^*_!1A:J^KUqϓosXgM<Efr hRO{p ͍g78Y7R\gH|G8D`;$PՖc@ wXTXCA`T 62C-KG?`"㕑48T%g8+'$dd1~vO~78b$z yH¬p#jM0ea `L{HZdL) `0DNTJ_IÓY^*4+?4(%hhNʓ|])ME%sNDť<+!`^XQFdUĞ`eJmDuɽ?1&yøRŊA He9ZG48g}y%?rܤu1={ !ɠ>$MԴӃې>F9F':W|g(ӯ}ޮ c]GXOl۸Ll* 7vt׷[^WrTcUøl]zpP˗bǔRxܘL3!~Cjxj ϦځZmFfݫ[#!c%s-4C@'[/>h2O&ԛwYhv0n_~>O׳t ''s5|3-ݛLh0'Q!z ZMLo<6m <|-@DM%*Gd<(yG{Gp|M%e Zѩ'=|ν $Q_zON=!B_jBrFɆP3A Vι3{hr?`¨%y-~~X\Q8{8}QKèzh cv_2a?IQՋ^J6PgzKEtVh ?zH5yش&%wlFza~hՌ& EL{зz7>NM7uhȇ2ܴMJY& hM5ݺRRu餾c6 Qy=;z7$qn{7л`I}>m,"MwrX_D)Enz}Z?'9g,9 oXĥ=<~lZ޼4vğ޼٩95~MՁUP_JUE-e/ ř.r'JK^2!={,aԒN^xx9ZR[j=%LҸQ鄨 7#IB/0`;#ٗlG]yZdHYod}PYU-R6@d*##2"3HVRꠚh&rU2h!ȓ:ZdX̋A&N5½RqE$F+YJ#1SK^%BAFABmĤ뜌RY 6K?b.rAe8JlHo3KOQU!uPqjvTKź!-YP$+^H DOZS$}L%DH: p`<眢/|m!WtռXF5)TI%˫Zj=ےXR?}1Y ;VDJb\Bh|YiMQ)&ծj" Cm+~z)~ľ$ #V$*;z?knʊοAz[(@+ůwռ7r:Ǘ?///~~v"vep[6ջ2h3#S/aK&tiMK!,#bOWO'yCF/{}N'{xmfή=8TK Rd/\$gYbc\ݯhnbўw<v O~},␉=~OѪ1g7CujFƜ\lH \ź?M;Ej.3 뀤 pOB GEX:HFzaAkOUTXm\$E"@H$2 iڢHPdV R"v G+ϲ%M}?0$EbK%cR2ZG%rvX7^Nn} #4YUxxl@N57^E(97NCQ(M1ž{nIoK?:o/XbN/|+:合+h03xMйF+ ܃h<-oˣ[gP)pĚύYdu,5^ǒMl/YՏ->EJۢ)1R4JM,:K"Y ?tZ Qtk6\h'hpU7Q9O$֊aJKĶk/ءfĕ~]qq8 '[ϣ6Ď(:5b8GwC8Pb:kUn8f730([9ԩ\ ~圊Tc /H t) BIhQby]& % xx.x~˃ }R{B*q[|:o_Of-99LYzjX(QUj3^TrB\Ȩ6(K*bgJG2M}vl/tAHNz5q G+mt:A SGD ْɔm|R&bmx(_.v)EBqIcA\Eb6Lxd QUƄtEvXy&̣w{3X rM>hl?`on>`K[gY T@^9*{[(V)[t0:8'As9<۩0NZm%Ηa4ж6倎)/#sFt:z"tڜ{Ey7Jm(PS3BiqXi#GqY] 
Nu1:SEwع7t>.Gr"@hˇнoFB;nOTL<ۑrrI;US&tNa^ĩߋMйSŠ傰cS j;~EKIb$:eݖ󽘧nF݇պ@B j]Nܲn V Z^ ZDcےQ;7ZCH r28wv{oga۩C-Oؙ(CS[ W%505f|iPoR?x2Qw'nvMrY/O%aݕWoۛ Kg'woW -aK`A Xbt'Kf^ bIt%X2} ,т7\sgl1~KkabԕEіC03HF[@W ˇ߇\ZS֋>M@n5K{ ;ij5:RyvHd^F}I r.sK@IG2Vm\_XAy>;",.7u3]F|욐Ģ_4[Aި(;J>0d#륏2k|` ކq:%kFL#ԿѼ!s:Cԟe ,9`, 4ːѐ$RV_$18)|60j~2e/RvڌS C oְikM!q)PZ§Tq'DHe6P(d^[$AVe (28r,|,lsM%+NKLLPt2hT[)Vv5%'2A2EJ,9hXUelZ%7,hV"]`09:6ٛ2#xmT41)H U2(9RTi,@ޔPt6l 3 vq/tx1=X]&˳,LYMNPKhz+W,V\L֛r0Ő@K X&g+Y"$cjwY;$]&:_ c˼a#+`llOg`-T1k!o]FV|֏[> b&.!@NwZje2߳^FK[lY>HY FKWQܲB(Ɛ&af=dmxJ(EbՕ Dd%yGeq컕!en\m7`xQFv:|oр ›JЅ\C^6>zre *kF͠}ȮO=Rtm@sҾDd!b'%Ȯ">x".PLP)6@2@#mX `hc0bN3R@l@822tjRm4yeὸFIF{eX_W]UMM6 ?uMqT`_q/?||Է$S߿3Ć#׿̔}?ۛsCO;{Qpyuˑ=Fwj볷 8fa=$ggo՟o suڥ=;wqţ8j)!DH5q J,x; zCy{qZ}qΨhЗE#1{mv dİH;TS@y-Av=s|Q)E4tbj;(_#['M:1#rVfal>VkGH.?'eݤr]Ժa&:}gFhz3[/6|hWd}sYaoήX[w9^}S!k /<~d+w4oquGvI!܉&[qs+pN9yP!d։;|K_m9ll4¬L<@du@NJ:Yɿ0$\2 @A PxO8MA"ρJ/6[b(uBW*"X爀ACZ?Lٻ6$W}ٝ&Waص5f֍>SIKc[`ܲMS)훰%5ISd&{kԆGtIMVfX$䭩kF])hdj!=g9oOyL _՘ާy?g?z;̫VknL+>jŋָt݆ӏXisKs^Wm?-ZO NO8 F#X2g]ԜO7xC),Z<16(tqlLU1.N.xZ5!MbFJaecXGjx=:`<ǖj8B)JKIcBAGBcE ~: ЧNZC{}j U 3BXNP'8ATyOh<8l`K2%3((V<:CJ I`?Qw%KwjA~ -Pv6]m˨V YmX{>Ɇ?%sތy3wmmܶ_8="_.![oyݸ݌^Zy4jr3js4-@pG̓ -?x`H V!>m,3;_KT?esҿ,N9-gCJ[W8T{:;l^u(dmx^Ɵ.z.\T ,Ylz~َOms"jm/+1?awɀ}ȉ{Ik 7|LrFk{KVR[%+ߵOy^e! )ATgȩ$hc҉vCveYPx̥(6)9CD=+G%`~1V_nXM%ylR>cQQP%c` uQKxEH"5s1l T]u1*:'PGМ8T$g\(iuq(NVat|U*`" q,Zt&˵ ũ]+fb$LyD8pR@Un,r34 )GPl<%F2aAXEj69"#9222ZBK8v)T) .”85ʲ:y엺Zs˦բ~e3α`[NX,gejIYz,XA2jYz,/QVt]RV_6=Yz,%(%J!RXZY-T4sc)y,;)K)csd)<2 ĤF,gej19KA( X "ՂqY*2GOb5zt,բ.:YJpKI5_JK cieh\ >oRc)MR y,c,=o2RK0XDK+13KeGna R y,ũLACK+ y4oO\."@Ҽ=Q+%b ykg9h35/)A]QA(i\lzF{oEdZ_a&?"Fn*M۾SPXȘ~ӫ?Li"D q1֋? 
9bL* \JD/&7+9+5l#n~|BJɗT mMzSA(4 fA [/l @5)xa ƙ#dRR0)Z^辕 sUWwM,M҄m0 ğ۰aXx`9teWoӪ۟(Xt$AWDR!s31/0?t ysK ז$`Оht0 ^Dޱ2dž:/ψ8"[>[of쨖]/1o\nX-/?U5{'04YMOҬv̄1+MIK6eM&}snb>g{sbJuҘB̦J1H+UciyjtswG1Z sH>U-{KPWcqw3htz/b]eOS㽿rvCqpJ|?8m?et+ :hN{E*rV27o%">Aތn&zVAuJ&p~HѭOft!8Eo8K>ĞC4opZnw^]U_\_GHc9g7X|"d[qn1)AVYJ e}Juڸ'>{\N,@zMN dGn¦Khm]uES?S4b[:[[W%7&EgPrn)sGt?,>3CRsv<ERqh:4eo18FQVD7U4߾9+ID9ʹ^Qi߁UFodzB:xzmI8]9@;$8N^6簩9q xiPe#1hFDS!U oy;$;馿Kg~b(/3r~n@ \Y3Y,(GI30F0P560T9 0y@[Kq*d Q1J50$u4"e64 !SԌ"TcY=Dz~9D_'( CI1Foۚc40ì\#16X2J``1 TK*"oh ; DKU&)MU7 JQyRvsUꉛbqODyRvK7hŶ܏aX)GnTnIA=Ij _k]8(M"r1c$RnrL{-?ige׭p+/d[,vy-b+\)>zQ1O-1R|gUg2 G[ۘ1t@ K`=wF,ُa1ptm[#2r91%';A3Fzuu-oa:B 4OX3Dc,[ ̂S|.;^t++/CqqJ ZFR rS6CemzѭRft!8ERR3gy}9w*/H3+}U9zuqsmV Rk\t*}荜ST$chM#]n$7A/3pU%ek} d[cԞiKfugVfeeInuvf0aXip.*%r݁BrƧ5LtSQWArb|>bS?Vz.}F|&, f$^%|$Qbu>oWr*vƂAz狒8kɣSuv<;3kݡ};Rwr@zb L7ֺq [@  HTO„E%ZQ A1b$[ )N1vOa%6ohVa83bQjߧq &V馰 fpC/ҩ^,؏x NgOB455wZ 9BIdI!*ME(Xw~lhwN);{vaCGå-L^RDZh D|%YNUB"#JRBsQ)'Fx+?^Ė?!BAFrF|.i^?~JCzDSw4U).eȝO},}0j τQo1G)&R*;A`zBI Pr[9)_Ǻ'{ zDkF[ G edz5Z€hkZb{WiA:-LZ}XAyk4NDrC,GA ACuB{ThùFˢO(Rb~LdJu85)% )Y92M! 
\~I/[Ԭ|aGݕ/MKCXC!` {jb9xr&&XCB9#jE_A*ᬤiL /I)DQ9(0Fd7f ;vPsnHAQC~f 5PPτWnRpPHCLi6b4xi Z.BSXp流>8,">ub=-ylg|X|ym0{jCIf 3MsdحNBЂ\۫}q,nKP7\뇟vlS42m}V^aPYxՐrm0ZCe-FIPElJCf4ZJ-]\n_}nF=V\+ze†W^zlT̰J϶D 7^zv]^z~zpWKvI_¢á¢#V:xU"#Xuui^!j)tBa).RruߺXw hv-A)څSi=9Rt#h ѭJm!Ai[wC9EwT(6Ձ[ynVM*0J覆נ0tv&5^'r#m'SFf!ʺyfʰIe ѵfv+O*+ O,ET-b>:g {H$ƴi(KfܝIɌ+ˑɌádr:8h[b>\29ԍQkVt_wk[Xt-Qo@}oوV>T¼5ԝjyU+;n5J͡7V[+9}^%LV~.`=rjiNR䘪7ڜ0j}}PxC9EpJHTs~q%6퀳/EѭsvT^XgMQD>(Non\E(`ѭ|at! SA#W) +/oelwY]ݏrіϏ><8&>XkB9īOʦy_Xg;#'7J..=_}}8xw>p,/&a&_Y]Gw6DM?*Yy◇kNnV_?vql8_ܛ8Cݼ>?;`Jm敚>D 4S&>qRT^U1.e| _gQ.!s>4X`!8â,p"|1[9(Dx]|st"|[·tNn>~bŀ4JpԥIڀ~z暨 6翯8; .C/.tL4"q#eC|Gŝ$ZKiBKH}ܧ|]៭SO+MZ\_, g03}IXy39p(ʱǂc6o~y/!q𛳇߮ҟ?e,^Xy~^5ۇ_sO9/˥ߦ'̻l^<\}|S+o ːhe*j+XP1Xz)8WvxPz))dEHL‐ބ%H$IHIF=x"{ib2kW`f't%Dzl͡{i^zM6MG1|@ 3e~]H}ϵ2;}IѭR蓢8_jMuӚOeC`|u]!v%R#I &+߿ڻ~`-wG#k䇯VGP5sg}ӷ": $'F=AhE"W&s-hm)Z]e a"!P=\‹>l [lo]r ~*mY_V;LLl5/2 i #^]SKuuOwr"_ȴSxclNѾ97DI܍ᅍS@'wӡ7%_z؋&(LU!QNWTB*>ãud`91[*+o\s7gзbѰŝ.jJEXPFiY1k"z^WT{GiDPøFM xCK5/ŋ܂BQaLh͓\`E0ڗ..LCO~G_ɵw5.z\6$=Ub.zxm%o-u[#'tXs 3ݫ99C1f%)vkSo '{qվ1c Oj36~,9$ 7_<"3]=qIwK`spʵf ݓv5j{2Ú^k]]շxu[aas4ȸy诔y609Q'Zm{oc#da#EisQ5U)8;qJLWm)VUERx6GES :ݪ<[\TMF4i+*gBHԖȀUʡixJ8;BcQ-!Vmt} .P/yd)@Kk'f)N0OtgeKk%4;N,=FJ]RYk`e,&b'%cf)`%srÚo՗V:)9KՓ`iNqe,̱Te,N,=X 3 *cimA}=nIqKVOOVϥXz|,hW??73՗KVkbX9*̵%N$):m>Fsvc^?E[4FX Ђ FG&Pϝӭopdh#"1f7 ppY֮Fi+j0oGQ2 .& UT&ETle UXD|'fl _U}B}H۵y:sSgQ#WV!/׏%f?<݂L=%C.cE-uݱ.ࠏ b}VT- rУ.z*|;gq^.?@/7n]]f*̖?hl,ؒܒVKhj@  STOEs`n (iH8g04obءao_QJ_]ː ЎTP5/ K!LLGZ.ˬ˜JxF fSm04^Q])Mo0sq\}^~mK˦J$Ӄ|J_)Awq`$|CRp$іN]/*ɗ!0 .o*b?*>̖x?L8Ni܀ӷ_4|dvS$yuvgGJ n,Hl<9Rdђ0-q~{KcՏg }φS#}gZp[2> snݠ$N0OKa^IN聸#a{0ruʗk'LfwV1h~ 14Jrj2iB]=;bdu_a:'/{*8(4Eg\꼊VKqmwu$֫Mwo޴ބ|_N !, rFleGaepqV]ڛ$ZѪ2b"*+.{Kء}=Z`RCo:Fȍ{٦˪al۪>zW%TI6[\qٺ~j/:&Nj$V $7_O.dtR 9@kyY^X G7̵6]7 _g[/w hHз)<>b Dծ/MѶv[QeGL @5+yC`B7Lm}ѡJaE 1rw\;lWU;dbRrύ ݔAҔǴ eL'jpKýgחƴSa¬/>>u~>h4_yGy}^s`֟(o}Iҿ⮸H?ZlX*# '^ Wf$v%?d@ ^@^ؑa}{ v~;&KI5iۿo]5&zX}&Ǭr~JcfO-0ٲ0ETLsʍCd3`܊%U?t9twU =)TE 'WzJ'x $f(*e/d +Y[hGUVln yËZ{)"G]g 95!gAy"*_{x!E[qR,]|T!xeZ-g/FjTX$`Y_hvr_麨n.1]99O鈳^ 
*fI\oMIW<^>dnqCj3tYa CAd\1gɁ*1a!sH#hisLs- {JβAؽ?[`('ᳬ<)PrH:fE,XdL3Ldy!P&IPJng0B @cQWϫ1wv."U :`i$hHRHg< c5FA`9M1 X+1҆[)^j H939u! . &I/q8iZghR*eJè.Y^Gr!I=f+aR}2 ;f];2F P~֭Z Awd fFkRhyFZ( #,l<3bF.`aIfJ1Qla% x7E5\s4.Lo){X@lb?=)*\8{h` ηޜ"uZd4v;nnB a_ܹ5np|\,7s/e?ۻ$o0.|i=lmgbrVW2WgbiI[NN@.-4 #R+.v0K[{O a62i7J,q _uJ;޹XOF'8z8_z 14ӝWոIǀ8DEx3h J8 M htMSgb} ?Ɍ׊mضY@zOFw;J6\]@ˀe;'&ǂ\")_CY>d+YC6 Nr#A-d4J1iK4Si= E%YABU h[`,mI&в92O93( L$$ax $ى?U%^Rkq0YS@ ~O& pq hoyf!_hK9 PD#Q+M8lzA`yQ Bm{QYmUfE"KHFTBYyȸ)mhe\0vS-:RW `M9Oj3fΡVB; )K83pN[ܸar*+6v 2VN6v6 4nѕg(W=w*6PإV? |GLz~;jU9Eio)J$ _)Jku뼰^.bXtJ+恔rny|O7."qX:qRqX.Ч%7NO^W}~W>Jҏ=|#W%bo^o> |kO;2 Qu^ԟCFC`YS}m ([N3-i+l0aR f-9(tכT7Bƭ>7cϺ0)zTJE9yynǿ@&JWJ'eEZ-;<TlCdggxqsGÖJLA<(S:f":BT2[sPwmЫ\_#LY*BL_ T3N-x JɈ\ {"v ':*Q 8,u 8a jtwNɲqsO#RFnC)H]q> ʗJP\ Sgb;}F qNU_2cTl5NU}"'r$p1ZHK6}BXŅd<0)S)몖-ޱmuU^ugCTG6S8Slcn+sv^.(޹(8b}c> 1+ n%pz`́I; {ڳ\x֔_xt_K6(bY7Ob0ɫ_/)\Яutэ_G7~Trl:Hm{s2/@3O3߬$ӦHx$7O%.ٟb|'k}`h?Lm6O}bW_f;On3C|~yca&Қ=u*y!5W[ RkcQv.,u3Y_BD2VdyN\*LJ2)\ҏ& 3gAd[Rk!mu[7e۪7$)2[J jBbK'v*_hO9cΤa:TvFKj%_c}~tB^Ӽ~˧."D+dLfxlLH!7r5QJYt$&OyN@ S`)y.Kȹ(PQh%OqY9W3O0OJ)d)L#R2}Y%c6F;BaWJz[O|b;!H Q9Y)[:1+@{W! 
f2wQVmt#Fܐ"`a4أHE&ը$kO:6BJQ :, ҹH%y.gFH,5;$IwЖQ{̦:H\8q(si!%S0(dM0:Uyç`-X錥dE#^W>"LYWnߌߥeࢯA>_4'}ޭ@AҖcp걡,!|QNԕx0[2,HYIZ0O| A=L_1,l<__+4n[O~ "ѮQWz'`Ȟж#)[=R!w^/`G6B!ӽ8nBFxxm NkRoRuE*ɈD]6Q*š?!}hp%%mO":#V.i¶8Wt+ C<+-\|q[)<+6KAXKT_ԩVNmcsaT7]4LEjq[)cyVHRM2gjFiȭ4`WDS-%9myV $$ayVZR-;8r+͌KAju_(-?kkħOfeJ/j Y{4{JfU4qcϣ,f&7 \J(?$/>)#?}Ӂtjntm0U TdnXJ%ھJ)J,,oօJ}NT_SZGl²Jdsy_K*AChB$vB+J)>$n:f3` PΠq (ZjӇmU2" <Z:|Myv?t1|*h3t cJ${ jVU3%g<ڟBpZ1ٕ58~E!OQ[#geh-:q*9J CJmߖKҗ#ZQH+'HzUf Z(*dꧻH' 45pZ6I]J*HR?jC#upd@Fjv>9뤯f5Վߥ,A0!R׎j|BAbGpn)dXW)h]_XYۄ5Z q3BIJ%r+S`LdQp"[C1iQ!<")yDx%8"X>DNل)> Ix@ X~6\sxakQmf @#[TgS:6mz\"BjQ[,)l5-d42 e~LGN0(Is<\$G,J#P>e#J$$Y|x3(~D&߯?;\ޞ1Đa50 IN?Y%\O>\&\>wt?\y3^}cP;a`WYeWS/|N_!3a i\.ެUڏriO5O1-G0%Y}u3?ޞy:Vqd-f,[Pͷn}H=|!6jbӶ+R[L4D e:@t Zq;XDN`B|j LƐ^TrfT U;\ 0\1QP@E8_oڱ΋T~hûm~42) {uW,p΁KmЈJA,P޿J^jA>҆eJ+}eMc0اR`Aug% JBL ƸÐ3B2x#wRl_]D H,vޜ-14IZ1E #) 6]>QaaV?L uh&7m),2ۇa \Rmݰǃ Iee>|BT\?_d.Kk5+pûX%yp6`ٯ\s@(+S]ttOP:;q-9yCsxR9n Hu.8SpsI Z&?jLHN+W*; dRݿ._ 9R'Fk럾b`|dٗɲ/e_V-{;jF&h#5N9DpOE7PNzXq>XE烏e m0I&|hL|ڵ~y:hѼz-m _<9'H0(-+!Er.P"8QH! Zi•D'2K'Md.սbٶDJ?`*L1AS2 S\6ağa{@y!>&¸!ňD%<~&U$2M+A@6K⬉xZS]T\;_x`Tz`/ Ctg}yYX󤭍|=ȇ(*R^%oT,vTFxKF Vy[൅n Men 529v=Z-]mImL@rWrոNiT T7)EfzmOu>} 30YLIrس`N_p%4c>h|(Ep !"4 2x?1H4/8F"58Ҭ[{Zٙ1TaE:ԬH[KIvHY1poS,PJM 1 gu ]ݽ1D~G%R3D~M,^,$y]K_PfȰK)TIݲ* 4l,xq0<̴s\8\a%:'f-6T8%1J.V-͖m;򃴷áy yɝЦ%No};wƯJDshsk|v^? ^7ھ<<=[1$W4w30BΊUyWbuouPoO+ud!߸)~%Yn*({nmec:mĻ1:q~`oޭ5ޭ MMUVC8`ѻA}G,j{nn]X7n56ސZ)o%'xLo#ΨϮӻ7o(#Jjfv6٦6 GF{בʉ; }}B2}]l)CrS49gXRd/4 A!2CPcDvX…tkoә9C5U]ٞ%Y'4B kL]̜d ў ~%mJ)O)iYbD( a%bC ~<}N~n08Ov1A^ ѩVCvwZ3 Ӏ ZΌR w. 
9o+ƫ<#/8yxnQ$o8j3UQƁx5cCRdF ;E V悭NQS0--LoNI$7)iHT* $TmW -Ρ>ZД,|[KR9 Um{!0 ]Z*sG cmz/rH#dJ=|ɜb%{G1v1dbTrq2LN+$7" h*'cʽƛ/c 7{Q'Vrl*U-ALoZ_-{ ²JRopgښ븑_9lju+Clf˕MZj6+u=\cW$j4hM{[EF Ko|!5|η˧_h\UJ7^zT)A{fggbmcyv(ׅmFË:"΀uހs=y~M{C8 i[oISo)P^Z[VXmuUY@ڂZV9/r gɊ3߽ۘh%Lj3Tc&{20{AEp',ؒ L,+T`YiDSҵvJl(A&@1yMBNCG=3j2$r`YP%s252$_cDպeQ8@~aN#Yh:k+^J&RDy\Kk#}>z"mTuKNPP@^:F6eCsoV Ff^ 7CI76Yƣ{RZMjZI HD< Er}z  B# ҍKJ`ĿDKP8#<6RϹ!y; K;锂d QO)H} E-bu-CbiXגhY_u]K*8Եx%hٯW.dٴ-Pg&[?nv{ope4~mMv<梽ڟnWI~Ɲcj?="ó` noۅduLɸ?S:>Kn5;vAM.J\t(EC)F; "QPd3TFk|CqϷv KD)`J$0 KO'QFa(J|GjNeTCJuJ9jCxBQq)nRYҍԖO%T6cRs8/=.Ԥ).]6JvO5pT]+u i1=Hz%.p:A>6m!1I;T&9Fm)L}{Lb$ibxJ3'qԮ e|xNIXa co"!Bg5k5X3k5-L%IoREYBVW_W? (VN|I r~}Ʊ< 2сo,P.rԃ*|^ok3>1k\F%lR`2:׆&U8E7vtMXm ֒[Y)]&Cдu' k::֚:Mz]XYu! dUh]2vkzڬ#Hq6HU) tJS^anT&e9 S0{ .DIn*0wtxs lC[e2tf-Y8$rYQdXby͙0(0uTikOť伣CCVISeQ*P&Đ9 7`Ԧuz`XbCX Z -xZfentI–e榹fy-d\ -K *cùbpW3@Q(6QKvDh,?||?OȕsQ W}V0[M; 9QɰҷQD\"+6f4Z !ƅ3Gq;ўE*g{y%4- %nW"b#'Z9Ѥ #ng֠;k&)0M`'r8ڽ7ft48z_N SJF4wwskJ./m5FV#4ݏM#S. [bQZD)طVm%+6˜g/v4׺ @eOxr fdK4u.~/K t7@ZF+`d缼)䩢,bD;}kRkKk]Hڌ hvH1d$,zCJo\k$$LmȢKʨui+hXB%Pi 9ZۼFhuF4T@r|]= Rsi=EKTxH:3*S'Lq ZdZ[;s7!Iu)A֢-@-6d[gy#E2'@0EU< JZEi6(-IS*s1F,,/.&+;Xiu+,9JX)n\u #_:㢹;jV?8V3iMGx\:{.9f3A~%TU>yՙsT9L#Ps bhrUvկ{Rww?\W0_Lqװ^}{y׫?tqj?]>޻'Se)NYorr݊8fGjQZ$Yx`eIZ]].>77O>}Pd%Lt+\Ĉ3[پ|[}خ H@{ԸFآ^?n-6ۅX JdGڵ䣪R+a#Qrxw =7!2CԓwXXk$xG6ͦx*tzuuss{\2ڭ _yKvEՂ֪w|`Q8zLm Ne /)_?gY s_q@CDRv{sG@ߦ'@`&ʵJϦb`56b?v|HN78t7يFK> J KK˭ONQ6fN7I]+>fZv/:A1U;j]Bphwqֲjxwv7o0ӣCy=`XԷ98O^ b#Eϑ*N9$~$cZ<#o;%{G@]2B=j ԢHJHxe=(FK]+K H5F\v"WqvvDrrwI3ٻmV܎AFHvU\}Z=?,ù.on|⢦Vuu~qm\\,]HuØօRu-߬P~c#ЙyyU6?rl%gS#}Cf/,QG[Ƨ'4|:4nhgFeYOyMTiwDsn:N7bۄ:gڻ]n} C4 S>n9wKAtRݦElT<z>!$e5ϩ9I9Xr$)c "rܵ:$cRoKM OIKNRF缽- Pz\RpJR)P*0-a(2 ,9t( c;m^cRoK^.N(]4J1pGhRcfR \Y=t(5g"ȗIwʏH)2](LwQtQF®Im֦2g}O1 UF4I!Iq^:Qq^:zMcpm!8nѶB*_ z> C:=;QAq ȅ1PXa 0Z c7w[ %;gk.8h!!H^bDiH-5>KZ1Gaǥ *G4'v7sT-K% lت3#lzh@Hqbatݝv0ѭ_^ ] O$8_)bzG*Ϩ0j2rO)<=^^N R0ȋ )ސvBNzh1BްtۼnjT?4zpAC{ce_֌O.8n=-8[' n2?g Z]]}On ~绻w:8cn"+Wޕ5鿂׎i|Y/wv7<1~x]DR( qT/@bOweeeeefU(W{g?" 
4/|r| eì=o znfm}J_iXE>P78⹂ n(|Yދ·|G(|=etxQa^ vLitFJH} IζG1"e0W-y<4M53Ҽ_KHpLB8ӊiu2 P.kZX9*5 . KH5IZ4a{\]?@Q}5ݫ݊ *~6 r{5Lj9DT KcW0q)0"0!_(o ;(qФ8M#gӉw曙l_SYdaagDԹDٗ/(*xpw#8*UjMQŁ{D6*}i!QiyzRR[(dpJvv\?Ʒ, >@CPz < kqbOE_Ԏ]dIùz5'u@@i{Y_ 7w wf@+Tزݙﳏ=4XY(RZKχN?_#F5(2i ,>.[ 6^/t"DU)_5t@'Np?KCEI ƏϤ>Xn&Oczɶ8RTtU(FF( B|FFQN\K-\iqT-sY8` -`Cscs83\s“Bk}eW+QݠQ9GʌbhtbL?zv" ?>kBwf^`fk I<Ȝ3-0y D8 R 8#|b4/yA܌C7UXqwP?*jfB6*^D[@ū)0ܹ)ѹ3-7=6IBzrM%hQ A*w EN8]tKߚ!rl=G&4xSh"tL;ã,L[qJ;M Tvq[E-F`;sꮣ2&`wnUnڪ*Mzxgn tף[[9]3wI[i~~m'[ 3&+3i.Fu {"Y2>cP1A]"rJc $cuzL#Qܭhr_KE!ƣk48g}O,jJ1Pvx83&,-%R8 EmLg3HRI9uWύJ9:^; sK*019HV,h Cz eҢ8JscE0emA?gşiUjJ1VMgFm Jq#ND>xjxx\ZU`Y^ivͣL >q FK(b=G$uZxf&٢@8LrC3{xX (UQ2V `_&KPݩBSN1=CեfSBf=пȟnqa|JG/Vi'$u[r1A+؛mx3ZN2 gUTs;VL`qŎa ,-]S ꗉ]7gnS7YP$U|4]GX ~^nCL(&sʥj||aod'VKVY@vj JӎnEW<'Hyo]s;9epy.odN9*bs)"#TP£ $}GMph,0/ [ Hɘ)[HRC¦eGE]g2q6.nDX*BuQ{N]xo{i\} q RϜ$BtooSVu}$n;RӖU*vûR_ޗ}TqLq|ӽ]V1 JH_=Raw5RLo)pQYPukӛ%ya zK:s,);mgzEnU Ghpzw{j(;/8g]f]oאcPYM%JIA;˺.g^Gx!QY5CI0 Mi/zQ(#70O#0g}V4.w*iW*z{K#/Dz)!O@ 0aFrilh;̊{^JeʛS.Z@,@5\v;z+I|C9pqRo*ГIMP[tj+bo2S߀3v]ožKtZ}+{` -"&1WTv2OQ L4xaX1!s %5X)v۲8\iDMԞ$:j~aYIh:jMP [3Auζ a`9:=QsbDX=ei3Vc5P+'g ߗ]92qrP :&j[5F* {KH[{GCN(7}) VNpk{296͡[%=[m-")U >>~J>^>ݻob9}\t>.9#˃'5{u#;3Q0%vg=w-$R,d;InL<8rds]hX\.X#L ,bPH u:0ϋ˨?5S}wXsL(.,)>Lfnx|Z^[)axDQ/B..>zu3|:ZQ2z,a.rE`Եg~mrl<_]f\ϗw^-[L۫g^>j^BUQɀH34h,7z*osWOĊ>B$r! 
8J'j%wVl rs,x) qp"ֈ ؐ5&i,ךcĜAX@*dʥ+8UaF6ג^ъ]6=l7S˹8V O)pyڈ P9[[^^ς-L\u2VܕAcXrt.\(Fe\x~}} rPTݕσqX?'᧫͖ou<_kٯWtu;>%|EF?Ǐ'4ֻBnWB#[׫ooH{0PtKDTWTpލ~G]%!y(|i(65҇ښܢ imN{ń*G[p6V EK[K#HF]vq.9~)?3!*%3P(/+zi’ݝ^b Ltu,njgQQ%_h`mvY |u}Dm"Òv$YR2  P]1jk|D=1 Rux.F:+^gNjpg/t@ѓkRw#6d ޑ-iqnwO^e 2rڭM\CXҍ l犚}hE4)/;xLE{(Ws.Id7~nYԼ}NTQ٧n2}Yjv+@W.݃;DpC~.V-Ļo£[{b"NkaDUm/}2@ ůOdbK:?F9"sV)kN ֨ L* !Q}aeQ+2al0JL%3 v|cۘ1)%Ѳ<"8>᳜zgBCvt"֒ 3E2mC`Ǘ ɩĎߊ/PToö]]YBo[&e5M`BQAws8bӖ0EBtZJ-g';~hQn!yb=>1B 5IzdŌ*Eci*üB!ۻo@W4tX3?}rˎo7M W%G;* VxpO%t"7f[P1pMB&ܕ˺\!& t!)bs )P8]L"#"Lbho)Umr#E,'`I^Rl41Ahnι9YܻKf:I :B:7KO` y傑K : F֒k&LIŨ 捰#LۜURcS @Y `ݱeFzPq<y Qm 6AG[ Zr'O/n&huk70tf-MaqHCGI>˽R#Y坢Hd:ٸ"{u zBJٵmBNU:?MSTպ`X9hm~u~&ݼDOo̮]@(\@yr@Xu.]+s[\ݔLHifg>sL0gCPD|>iO2-qe2b=-S}9 0bL6=Xx^~F$i]ٮ֐ψIi1Բ5Ctޕ)*_}F(cKI6_3[$7>\&LZ;qF`t칂ET#˧sbOW5 8W<,,~6F 7ј)1+ !v4<&K-YG`4ȹvB-y9'YG`.(Xrqaq_~|-7G11Ւ%fS:J&Ѿ˛X:3z j/ Q7o4qf|G?:sZKmߑP֌t? @=Eg"27ۯ?1|t1Z &c9m\Y>%|? l{۳-}bwaP֯lAJ~(ȱkmlI49#gȪjt9k|Ǯr[g.dn% U[fvgTcd:&_G ucұZ8…Ӷ[z?柿y^M?i+Co<~}zzKzû\S")?60hgfrg3Glgՙ,q3Drkfф$k"b$ZDBQM,@s,y~FOj  QLq9MP.ab9A,˞ZфcCbdĘQ$ 3cAe[UEQp1ѥdqόCCHL xsJ \p h3%L^fv6 0ѻMLVF"md1CEJ!YSIe`\c*:9{Rqpaݷs RhϳjKI 2J 4!H&ɺU!\mh5|\V&`y nr EB:B|W rR(sʢD`p'%,£C 080Y|kށzR~d"jv;M:t*KH=?EiྡྷtWHнy:V1O2ˠ{s,+Fp#f tŸ8˸Y1HJgBQݹi.fqUic#[Xd" Mib(L{D<6*J+)V]`T}Xs?Ul3S_@S54KȀ`l3s 9WT")}<2)>}Jg+=~hemBQ #汎bUpo~ $uju$3^6] $3D9c#b`*#]mhƚڕ!9ɅPDk:of4'50< {8 `wWof@( mD463ZT5675 NHL*e@d4RڬEUjQg^_,ñhm %sZFX7?(QPL~!-PJ7>,9nJHz%v8L$Uvu,4z p~mP8}$?jQ{R6G(Hk;(+angzmKߢjWgu5z 0l5nfLn'^P̠}/Jrbeo}=i{U6>+,fp0>mSnZYdb갔  Ɍ ZC Ea1' F`jcT_mL'9f2VfCc{2ͤέ 3d)cd:+!eBB*qE*Q܏s~n S&Tϖ17}6b.bWnaY˰l4,%Y&O9炨ʒ9&8Utj1qyPOBܞu={ Sg,0?%XSfI GT258ΖI]pLc%X_:dۆ2ٶ+ĊR2Cw>`;yVނl &TY܍epJyPϚSdfcDHVK,)t}f #jȚ*O1Uf q--%8w'w`i#f!f,!cmmƳx$LAYU  C<TJ*D.d2b5<7#00đ(TQ3Mu;\I3T7.iJECgVN.8\i| ɱ%R."-dl,;#A@ZH~F 4 uT@3@}K92T B:@$~^p)*h!^, slOK*ڿE6ۋ#Dp؞ ${t\YjTS~6z V<(g9AJQ,w)6sD4H+m+\L V$B& 9Au W%Yئ™r?Ҳ$)BH3o hjfQ+Bn`5Л;ļHr&dMqK ݄VI-H{I횸써dAe%t?ёk1-mC1O:Ay ֵLkk%6!0]G )OUUX䃛`@|nORv'P椕⅟_ߟdBY)8AÍpdO3ĤݠI=y}F޷`n2/-0klgkΔoiWh7!^8rY  
YO_?F\NjBn$MB ĠG1%1XIB$X{O<$N=k p r=h\R@* &%;B^~_ ̹SzpUf3X[-;J\Y;3.L4%.pmGOU;ogO40$+$ٓ?/̧I2)hr~K"?ëG(ՅE[mŰf(&$IJ!-w6_O@ \'#>:"0cb-D2î A?I@8cTo؛.D2c8Zs,4{m-ȘZӖJ/͢Łs=7uD]}MoL2 OjjSYZ_gw.y;LͅWO^/.L˧;Eo[ޠ7]>{KWv|n]]Aw{o?^8 ]瓁&Bp< GeֿzC3 ``Nn廾 Cd7£ܯ\PpyUuיe@I Sw{kzm>ygg%'9\6<$]E5Zs\wvCj!k_kMOy?̴;:w 9gFnP T7nm9WqK㢚5|^dflݻuǝeSJnUl3.b"aa.\g5;z{óؤ9J ӁIlEQZxcݳ*2Z=;sS{6+u 4 ;z7e@PYvʝOB 1p/;P,[ ?ggd}͋Ϋ_1 J'ف!"1EV "GPר_FY%&epL߽W?tO~1 =`_HZwm_]wS] X(*;.5Ob]tcDwcP˭& >R+[|ܟ(RstF`Ӽ̇;Od=g]]~ (\K|=4>݅0~QYf7O{҅O>| BU;'xY^r6QF8Ǐ5pV8Uƻ|}##h4ut(j֥uXXƞGd!jCē8b(XE K4x Ol I.TAn] ڀaGES N@ $ęz޵p֍gJ`b+:lʞJ0ZVr-јevxb5x*n>_6- nHĊO⇬\ #$뜏H E][sF+,=VΆ\{fTʱK+7/v)3Ŭ,i)*9.W$8 @Lrl]@ѷ8ΫK]K$[W C#-\\1描,w0>vdL> (Y@=),&4%1gn=D5b֕c,8^cil8\wN?N?*8:\ׄ\SMhH瘞*0 N`é\4R:ፇ<\Z mUY% ,_?@SGS;/t[Te^}l:}p=g=ܣ2!/XX$׏}9@kx8zW't5dw|}|)f?+'$$rYޅbǭo,Y9.f|5Hc~)N91A(]#1?Ra>+0ͼP9 V:L1/2Yp:bsS7rƍc& Nb3;5I,[㪂fbWhOʀ8!8"5ZR)sዜBSΕь\[,JH:D{R:ĸm:ԚtHSy| \u4-xhȦ[kR_^RĻMЕ3DLxKpx92xE#g! ˠ3=XG{ #lȭhP҂@Sm䖅xF ; ,b?PIۦy[ iuAN9JeQɃEgTI@Іs.M=6ь9na-Cj#q2Ʀ Gfx$ =zݕɇKyxF$ 1g'Ldbp ތ?7xa0$lX=/)i͋tҾ*7~fd^O>>GF21ą\׳gf3ЫP&t~܌ p2􇻫+W\غEMCR'H B+Oʍo:gﺛJ5K`$>Sm87O'Ir 0P[^Ec"wYZ_ WqOg6x:(6?-a씹}R{V҇wN/ĪG|FƟNqy6M՟oC(9 Gc) E):C08Z+0WsM8L]J9x@(lFUT8 pP<`rP8 S.2lB ;ǟfn).ǏxwB'<>gG;qV/ESՋ ]c{2\O>yv{ʿt+J<+-ev/ F wp#^(|y_XKI@ r ,X ;d s86yǓ |q:%>g䏒~ y/* $f4u|jjYd^OHٹ|Yq&gB7sߝL d*?=0 o(/w&&[#Eio1\uʞj+{嫭*> +CdӬ;Ĺ40x3 +[ߙn}yũ%]u'B`DOMQ.4zR`B2Kh{Ҍisu5s$#@iY)툐fIfRfɥ4F)l^{ UV,&ctJ,ClA2kIx_E႐%r_&vd:%t9JWt.-B$LRK h۹wo@}y+]'PJl!:hаƯ#!24X@I~>?Ìl`\S:M} S1C}z#vr_5:wL&A2 0c g"wq Lz 'J; 0Vfa4!B", ѹcMY`Fdx+kθt><™ 'F>o9 d=p愸q926>D@1Qq#yd<>}u\[ xNIN+=©#-Y2h=uS `"Uτ2,x6ɡm"Z+^h55r}HDZ6-_f1L*nRRZ@ k7:VgF Ҥ/Y@Zs?@edE%{=`."vм}6{uG1W7t$Z`& 5#L`KbBԠ`&Aaa.ѵ_\f]{O%#JP8~n*=rűɸQi 0]ѵX䂠@A4ܬ+Ju#vHF/]*jks)'Vfp.GEeC>-Yih41`P) *#C@+fL5?TFOLg4+UGm%1yuxlU? v)5e˳S.%]H4dQ4 d-ra!"?1uHj"'00JjghJpWX^㙽2GPM#1 Ҷ`i! * Ǘ |2L=X6Zl:mvϴTsTRjvgRCPNߟAq(GFV+5E Q_oc(ݚqe/'OhfDn.+7 'ݴ~C7-8ܛ$*"y89HZՀ6xj&ʸq6yZGk 9ԽwE@7Ҝ"R6Dj 07+hz! 
ubO?g&@Vx&mC!rL0`q-KTLCOT:OI2 G)g>$T`FԈq6Deb,fYZWʽYHh0V<0q\"}w*Xc )Z: G 9Del)m9z:4,Ϛd_VeÒ;o_dE4b?Y͵ļF#fcQ9'5y1|X6x0xYt]<V2(|2؍5 = MC71R|=T?:5gy 0n%# J'eT7 R[=Z Ցyyf/b_-ts!heBG塘uqg_ǜ~l37,V(J-6LW^~xNokr-_a2΋#&חeCXydx'mz%jaF7021+ps6V)O*=bjџ5$ }Śжm M1jOi.dOy2q;DP؊"tӷ7!9OU@0jWBc5s&G?IS_(~b 3*Z98aV&|)g&B؇ŷCn!GM7fϵt0kbDZܢy5q5H0RtPUuL=ehׅ"jMrg h yQyu\OFGÌHV=}XMhƒ8,1'B1<(CTP- 49k f sXJsUk}09itniw r81P} 4!,D鬕@<m޵c"eH0n76^dc;AlJ*,7ұ߹v!fWNÄ4 fsì•HGFmBuhQlQ5mMklY#D;HZo|< h!NT2AU0y *Xɫ7ku 6r9aaTcH`^WbIL)(IR3q0B\})5́6rY'AFZ>I$fBs-)?G7qOJ1H16rۄSL݊;Dl8#FR RL6*Kw45a!߹O$Hx*LzfOk>V3%xriJ(,.~-*Mƅ&v)Iͤ/;r6=ɞysD^L6z` j{_Kc)@F7 ;ЧZ2ooz@wTTEv耺jTS{}ؤ>߅*6J%ʹ"Ԋc>jg6ԢH:y ytޥͬ@2nxTs$Q5 cṮ!)xTHzEB2dRKFlư|R.Z/D5)9 $=4IW-m11u~Nt"c#e-]xjSbQ}5ډeV`><9u^IT|U|U|U|QWqR8;ÜFc,bڳJhbvZi*\!fIK*"'-mys$E/ IƱu7?%6`\I.86 S{n9-Oc‹~DR4xb\e0X~&%^(rǂkRϭh@Ȧft:gHY8M=OJVFd1v x'$X)Wa˝bN c& ô8CXnUEȹ_Df߶nQ?uӋf4}lϭU?=| UX< Bx?Ko!_r!.>ܼyEKDۮל-wVw[{Ӻ0ψRŇ7b&Y)̕6bj0Fl8Ry-:+vl} T)B RW{?etw#%jՇ&c=kF 2hc+%aRr8`l4La-(Jö 9чvPmC/DАh$ĽzI `v ՜haZE!ainUFZ{Y#:fzЋW8)>b->AaPI Uowaƛ,!Q3kQܘ!aʓ]8( fsh}!IX<'xT` @OCL$]R[{H}~80Av$eD'Qa(8 叱; BĠ5O(.C nl< gyMZ{P"4sy~8ܺ8L۸+e>;+7y73!4G0b.ZZBM~M u-&Hs%D}3]CB+[vj.pS%{|Q"G[!#K2GѵxX8Br4*)ᛇ5W.ύŦ{d9>Mc2pGb b@Fm 3:/'a:Y/_vgB ?Y{c=B]uDl\,1¸[w wZ)A;G.Ni,5&;{-qDA.]j(f( x={;ߓY. ri9Q5 } }a!߹ؔ-:NG7U]LJ1H16rۄs/@v[lV7Dl|'5:VA锶&\[ և|&٦Ì2ؒndI+H{\̀${.6g+~Y|y>nn\ă̴}ͫҜm>Igg۶6%1ڭH2A`)Y~|@u )!,?[djbZ}b.}|t.:漿} v 19sX eœKɩX ᥏G4`a,&( :[fZxjD}=;[2~5)K:;=Y/Д%_j%E^tlz"xK*V%'jJhiLEf{Y6ӉG[SLNSkmc~]@$J4 6__K<~k[XEWwW=4`EY"zEuؾ^~\2\I#++g1xc,"2yR &|k5qby5z9$λD ^l7bT=ӗUq͘pŦd6<[a"TY5 k2t*$<MCK,IR 0 =R+@<<ȓhwB)Bg_Q9(H2D%cXy""XkͽEFT bWz P9S !hvP6p󏬥 8=Ч8brggLw_T6K1f =@jE0dqmK9_6G$XI`P`{}_ W1W,DDbN +Awj$\LePfh]ψY$|OjKܳNc:,ny3 `fu)Ҁ|zߜIC1HdX.+mGajon׸1nLkDE&ʇત"DJ0hFB$DF\:-Wlp*˓xw? 
@uCfo_^o{XϲmwԶ˄\h3Ov0nTQ#}$T(g∍~7Ooy҆-o^é1zb?dv9B~?>Ǿ'I4 Sn~=N$uHG%fx>3d |1T/gGJWmgy0D:̒"9ޖYO-B91ZjSyFzq0F"$WUbRcFZueA~gu $Teˏyz vqc㙕j^d,H ri*!qpCYFG7"фa.5ΐRH"H/lCJ afΛRc!J1Ha)m#j+ DFU>1K$Bs-)f?E0Et+ !,,A#w4ij MMŋ0),IǤAxjY:n懳ҔJ?+ha Pnicΐ349?%~UX7_>k dw{c]lpOAy* 5ڒ o7ҌZoyF9]X;. 7^9\b־wsX 1|j[zn>>sV ں|w>z>21//Bn>ۥQ҅IEL/&:@y3sOg.+}{;o30ؙq 5h42cA[8Mk{pۗvFKz-oO NgPڽS1g)yht>~rf(Hmݷڵ{h=[aRʝt}|+$SA}3UzXoPQ@-kGWA)rH$ujyf}stM[c[^|:CTidNGwNL-6j'.m]_i0 #ݏ ZuVft@gEg§c '!AMc9ͻc3( \~9wԼA?'CfP8>7>۳Ksm,^Nb9۶|";c+BQJ2G[ $`HBC1~ P gY2cp!0ҕBH­ŽQ$2Pp+Ld!^:2RmHLKMdJpV߁YlLn6@H d)v^Z/ 5഍c%rT J)neӅѤWD‰Nvd?*yS0tQ+!0zT{}/;||aq_Љ/`W֫?{낟?{Wƍ 3C ~J=;Z),OPptsܗHMb,P-v2H$2vDuP==_FćB,0>τB+iZI{UHqX9H]J I&c@<ΆOJ^"zz._%@)E\7JBJCjPNuHX5A|l5r8Zc'8|Q-|:}LZDV j2CO8|u5^Ɲ2j~I[5=J JcS_?(96 Rwa_!2K7-Zu.]%&ͅNrPkXU2-楰J#)V*8J(džڠ'AqrRILa -DN>}xc)F!IT !ZŐJ!11E&H9)KeJ!kMcnR?NC.dz $`$Zwo]ŢI(PumֱZI8o w^T{kNMxNJڧټNi*'ȲRVزo$JtIQ9WBg]YS kЛ;EC.V[(P'&DChgISSYˈOVQLjQvs`ofӘװMa,bqSL4BŽK7aKU(pMJPӽ}D^\6Ff> ){_g'Zck(S&'jiӣ7ίS5)}]j{ ljDjwnnco/YbElo4#uK&7nQ58ZWH-^E,KM 1mOn <uFn&U"5 %);@Ӈf7%}e$5ܧ턜.yv6n6`5/>)wYӁ]Svu*7 ww\ݾFTg0t]ou4*?W2%fH{6 qke"b$ZeDgBQL&@s,ۇ}hv1pdㆣ #%/7,I~CDq1ex&qQIKjj,2rF;^ nl,l:m*^8i|JXtF1NQ$sK ڮKk.3aD;˴XX=1LZ9r:J2N9r x8k@5 .8,Zp'0k= [L d p鉖*Tk(yuU e" T BͥV|߾=yxǼa]ūf8'ugOtzi ;Z$kY`$w7-E1XkńƚG._o!YTF oG*vt-E$m]MZK1!Im\/f/\:C2"=G mJ9_F@2 `]oJƈNQQdƴ]y;' J2{.ۜ38?^E9g+[q!4:*'&ԉpShXNm/Qv"Lt4c;Jf Z%qew)aE+F]?p,B 跒5IJT<+-p'/ᖛ]=%U\,sdJxyf$ @C fhPjlTFV0z/+@uҙDkR>4VA*YzQ)rv/E"7dzG8 p'E/~ɧqGљ4eT3M7"ODۉ8CJvYx.LKґIeT1p>jz%GP.(:k3Q\82i~]" onǛ8Ńإ0~)^O7r&zxws^ͣ=&#@C!$2C@1Ϊ ey{)#lH! dLp̥jrHFI)b X:p&%8 45SaAmPI(ʪ!289"jOJg;D@aIb)f( iF0Pb̭A!S#"Aĩʚ/ jƙ%S" qY3og31%^48 -d-)U6I-eƙ{8UϔFN"`&^䕹A7JHjjE˞ 巇ɯ~2 5wFm0]$'ˋ`Ʒs"gLE%ono&fzAfއ= ~>0wh:%,^ơsAٍK.70<5(&eAr@_!X0d ̰l12.vj5/f3K)ύ$B)M*մh{)=E)2MJ$A:!ZZ^D)1ӗ`P~|Kga|? M弲3AD D嚆%QD#8I;Gp4%;u[&^+@)x4>3J[1c,sf3ei(wXȳY 2%,gd C//bQQtz㐳ent% [KiuˍA8\$Z (K-ÔI NuL? 
3^J֬P̨)1#]L==~/rES>;.Gʪ'r}_"xZ+r' ow{{ !"b-"3pUǓl!.noݼawo;f:ƥ~aOO|$p9LJA0/]䗏B )8ffo' A_ 2`VdfU[lSD -SxnKT & ,`cURr0(C)&+C%[ۉ=.Vu=I)KR6'P ϋhbƣm;iV#&ˆv f9h{qP[XziV{-ޗ NUHnkc8$9=^ǯ_ғ23o A·`õ"XC5윶{2v\7%L"ۺ#_D^Sd =ĹaiâT8K&).JȳM'x>?Mtms<O̠ 67};L!3{eZhDЈ2L!ƚX, `B"x؉1qW#h,q`%#NJq!Yn, h8,YX!=':! ^*u Ǭ") CFs-I1CrBR`r/b Xva FzK|fxQt\q)eM2aEѵbګD\#JnbqP ZX -$3t5E!ݜbvIR(ըk}rB=OVgFG2seT8d Af`Cd^]ˋ_l璢񤶝;2$kBu]}wi۬ E EլO4=&n{c\!v|ٛۇ}4?>z[hY\/$VH{tl#c4qxyf*Gbn9*d*u%myJu#["/ս/X`0* :032+-Oe  *Xuyx17{DAĂs_fEMma>1p~jAy\ pboNWAw)zw~6`:QXoR|Ͼ=o٧_wlSZ!V ]iܙ{s=2pnf' mñ鐗Ps-˝go>|vLcfQ0V lO^XϹ3!C%H9T-) ͬz5R%~-Q )e%!CFjSJA ny`V =!^.[kp<EXTO[lbm Xv~=a>欍]D6-Mq?-%1V4o$m7gmR@!VwO^OUw~ a"V\mC)Div6s $IOWQƠ3.}cROψR A0IDLf0b`r D$jjEJIW.X-)PIkvYq'\9W!uXC!N58C2LnsyPNR-TFsUٱD `;y<8MAzT8f&R nTɥȤNFk$!F% ?UFsUx"<OsPlXK8"ZcEqP0֫ pAF >v<,ƲP#&c\h?dDOPMT<^^P?"*Zb?a@:B`#઩[ v F jxie$wӠ1`WD#F <8}VAڮx8m2N9Hͨ:C!DFأ ߰aF0hB# B0# H"$l8[:^RD蟗Ģ-S+k*րq #ٹet@?:G#»s~:24g4\pk - S<zJpe`JC^_Ad0D@3YN4+꽎nؚnEĽ&_S{4rN"zQaCVhRS=~%U`{=j9K?r ^G˛-=U fz}skק+-^veרx}zKm]Wk"B |wzWэzWwd4Ҩ7 v -:ݜSZ%H68l BLClp[:W ^g+P-ĝ, "1WEU߹-JGpUM%bH1T5\ɝa ZR%w6볨Vl7H?q#>XMW"mAbގe J#3 Ra"P 2VA}FJyuY|(UE4KN8 \ bD')ڭCK"33#R5!!\DdJ[k v Ftrz" };n"[EtE"4 Wh'U3e*g kH]6d+Aneã/VÖJ /*Ʋ/xe tb[B⌱AtX}5ea;BdPÄK\Q]IR2,+h,Hu)+QR:: 7"ȁP9ټ@CW`}}5ļÄ4S-S.,a9[!6E>:ayqu\LFC )e՚ LG펋-O0 #"߈ޖ}G#i vtD ?V>=,+TG0F(Qj)^UǶ dבuqM)b5NXn^")Sg7۟jȦqA9jM{Lz & )I/tM T6D&4!!\Dsd ޸]n!h\ bD')ڭGv83R5!!\Dd[kڍBmY-}FvӋwg:nMH+ѽeJ 4my)4[MIdQCa[hKT_RCr}w&@kAzf΃E/,O?›!W<`ecTxY{<#~cyxy"LyMޔ Njc`Τ̱B85S 7x~ ^Xv`Of~*ȿ\^=*?.o 9oǾYa<|/{vt~o'n/W7!73>yU /7z˷gj2-E7g[(FZOg/x{nͯk1O?=/Tq9ӳg=~#n % 8]ՙ,RbؘT W $Y-< j'cFI N."Xdž; \@VKV`uA+ ,,&9H-X3QGe{*SJ$dnS'3)k٥;!^C\Y^7$Ug[D&9fFHJN5'*J[L%UBQkb%ȲmΒ!`1 je ƿ6Xk&މ {GS $ Mҥ$B9ACF(X!qx1ż8R#]@r@bmn&܃W3דi8X_zEpWݽ_T=vƮ fJ181qi݌&,4 {sM nr1e'8aI6~ 2Q\rӵ T >v m2ZYeYI$;SPuoӪrަօbhaZYx'+ $gE3'yp ~~Q4ZF6hvf>ޗKkwI&A n1m@دR Xohx4"<"'h7^o 3Wy1mppppYvruWH PCȀ`hU9+oވKM"Kj5~ y;G}伱HhG|ZaG~ѕoӲVۖ][pE#+|f46|uC+2@qs׍E;V SY/IVzk JT;J߲`UCR 
i|;J޼pHHbGٰ}a%SxaQ"D1n(;cG\I(0AՖMǎEŎzpQ9@?QފfT5õaᦇ9m{DEdA5Poz.`i!\?1,UpR91 *RB˨Qhf]nJ"vAs!rqnmBX%-L@HNsjV!јCac&E.:kx4SLʁq=Α6'dOz9ƔZYgvܤE M?Sih (9Rp.)㡉 TLЊR!1jkxcJ5#e T?YE]rDè=YehGSQuٲa!)56~L"ǘl 5GUxh#ϿD *4wɖki ;2ObsOajh;' ^CVF_A>Gc Ts:~"_![YOC猨IsncM#tLA{ v ő-`ON9qiR%↔4-)UPO*(xkqk qeR_Z1-J+ǕŒk5B;#iBHe\I)Ji:2^(’ *UdPц~n!%{w>2݂xG'{0>UqòC^G؟)ɠWafSyqʲXC`[F^D]N9-{sƻb9I*֌)?2DAqo!\Lp '&[2R'?p8Ovn\\g<6~rա9"Wi %F5R*JfA+IIAlJU R4ԧk,>7b,Hu4jwn~-p{}u}z1gÒ\?h4WsO{X7l75^2lako;]E /2 &ĆJcS7rJZ&*+Gq$T3PkX9b2Z]ݖ!figښ9_QeuJ"xp Gǎ{e7:f#1>s6oA UMKI]_~d `s. l46Lܠqh<u9u&W=C qcCl>. ̻\2Ao/C >ѻ3 \}\R= 9<¬o_ܢyX@!Ħjd?]WWVE$g?O )^#iC. o|X}d_7^S* jXj*5eZ*FDK#m#@00r[HZ+{M)4'bʇFu˲%\,ې%Ԯ|R6/)W_1KԢy>飋J4i踯FH0<7L&'G˥ i{窕 3HxNAZˤ򕦄!lr; Z= ̟uW2BNH3iZ&}oW>9^'쌑*rT<`d6 v{8g(I7zO`g_^VcB !|٧r>*ClAH2՘%X1(ǰ%(]z0UBU$^ W&J%f(ە0yƺ?9k]E˼C^-y2j"\#Rgc,UsQJK+X&ۙI#֞أlN -wbC?. ld{ !n^s`{ɖES/F=sFo﹵G5MǤ}4a;qKW}WF{}^gcKS3­Cj K:6@yZǤC9i y֎f5ּBQc!7qVf5%~:Z&8{.Azŝs \+|ײ͌jFfjmA&bZLYZf|]ut-C᭕*&Ƽvۥ0t~9ucx/YqW_*e%R~woen(oh!dJ5^{(9LDZs3GFla]5*M)2f^ B{f8>>ͷ~_~x?ںxepy5Lt4:h4jt|-{d'N<SbE}7Gcwt*MAPP4P78kuP Dt&h-$ZcNgŕ31l![+J`W/ ('d=7ލӺW 'Hk<2SV*^hATΚ9^7 ɹ_k|ȁ3zc8;M6]ٶEg352z"%c/xnX C rVDbdVճ"I-_ێ9uPY-@e-N)eSQgFnPD#lC#+jD*H'# #ӑBr-^-o qQLrrq1g͘1+ E#*!]v-MqrhMbni듽Z(U_dhr5N ~}xL39Y?V.N1!_燨,Gg9*8Q|Tp31;ia3wdDBoYMIDz|}L}#ex=Ygq%=O>D5Q1(bL mL`^<'!,#TG&İb~4?^Ώw:?i|޴O߿̸wh]O5tx\ {~ӇXʩZ|)^7@L6bl10|`*S@'Vz6) G3m@31ȱ6b15 vc~J8*9Ti)c1I_)grtLJpJaEsSŅB聘xC;X℃Hu^ CnQLALjh0.R0F$D h|˩Y~ɂ+%9Dp:mFfiN{mpAPgb<ͱE;Pl/M}6R˓)BN<,J* JIrdцC # rF_ܼP'@zT`:٨8(Cn˝.p`h )PI5G`&|AR-1 $kcvDhHDTWC *% \ FH0oNim RDeQȨH1f"gy!%Gg 3|-eAႡM p4i!-Z9LgF)bEt ?29tL|& ,Z&)ly{}va9 kD ?Af+ۻa| C=(C'+W~8Ч!#LU`g__~~7Ĵf2ۋ%t$Ъ fxɟ%!/F1.dxr0Z.^ް&T{Xz{y7zY]e醚bx@cE]zM43Fs!rQl f(*"ѣ(gzD(Wux ce4ŌC3ÑZ,8Ԧ.ʨ!(؜g22}U挣nѹ /nYbfӍcpu.nָYA%tIęZ"7̗1Qo| ﲋb́7|Ӏ7<Iq}s=цq<ڋX\ խ}VCJR#1d`o9F7Mу0솄 zLЁX XZ+5uJe7l1Z~;&&R (L~J #xTCg^6 k$6S $5&rTv *m$Q"킺( er 2o%r_^%,Ww),k ,&I6VZX 5rN~{eDr.e&SZhSKFs[웋z{ۗBX}(]Da+]> f\MLGf`&yKz(hcir]!6[/y1?ݑ7ᥞ3oHӨB%R+k^a K*ZrY!6*CNOIe,^|U6ݢj\^Qc켢 
[binary data: gzip-compressed `kubelet.log.gz` inside a tar archive — contents not recoverable as text]
l;T_>ŁK M$[=)$ ޫ|G~_qfvn]0I*6{j.._6|nWÆ[61zd6_>ǐwr:\l[';̢p~ٛ) F~͟#|kem't}ExiR'>fi1okr$|n|{EbvlZu\S'Kk:_snч>dYt8ǹ܎̓vMJ jkknFeJOgOB Ty0&"$n`΍'24n4@78q$'x5_X=n4??YyYe_Y@===<?^r0~z8~r8躟a\鬟&_v7"l8ǙOS[^v`S]av'ͧOzb:D}'Bv4u3lT<|wwnOŅ5tuQg7ɛpXj&w6P>d+C*kLL|gtB)Z{fWRPV W/R3$=], Qpl{4wɛnύhE73?~O}3c:Ȯ:u}-m@0?Ķ/!e\ 2.GgG$KR3`y#g5" M+f1|r<wEDy}Ba= ʎA#82A #)=˨4<|-_'K]cs]r!ߙC9$(ǏcZ=i&s7^!Q NB؁A8 )![ܿ;TԕA[0nn] ~,Heͻ5vgII.]n;e K迷1`洫߄LEBĈl7NXZ(je]IlQ,(טHe*b1!JH 'Imb}!Z҄6V|jJH%3S3NLd Z@Hť9|A5}=,8Rt "]{>`d85J#n}>_"PX"K寃b`XƌDر*IM, f-; q-F[KD!f/?_xn[?ԆP$5QЭP-Hb@@l $640m r EY F3k-w+t 44n@3 ATN̬( Z,$gvA!<\"؛3dGFB Izw<|&A>4K=(W3Ps7;drHO-ݸF?q2Wo|,ZG)3Ko_A)!J@Wo\Yrvݽqr;cX}ч/fg")9oSKDHoXB3^h{wuNEvx`*_y;^* p;Wc QC=>R^o.Ca49aM^i{Zix+~󞈃QLǂ&ƜV Tsɱd1)1Wql%r"Ɖ՘806δ]Ћ=1Bj #1q6=v:Tg:ؓXQb)w?F` JQ B{FYo60V}` t^9Њ +uT 9*QHJ#cYDxsdX&fKJXڻJJ|/'|AhA=z1k9AuP[6XF$|2{8 ݖޚ+g\q*/C;)ᢹR&OR&Yƕn2*eQ*ĝ&83ۣdD:/ vCް/nc$.}K͂tOoAd!j* :bY&%>;(VjS CO,SQzߩl%F⤇Vk\,Bfp+ڷk\.ۂ|"$h ұJ @K^0E.:ph弲RȱE)\YS9 /w_RZg"+3chI19)N58aRc5Ma G9,Ǧ[=?Ne.Usn Bc;!T`*q4!2 &/QQ ӧM\G 2Qz}уoыָOUT!RZ qΐ# [&>*A4ݿ>y149J&s:a1 C 882QY ;\*\2-C!:^x<]Pm~q ɿDE3g cHXTr|*{g . nZ4A@ 9KD}l|nO<SrVumO5w&s1wk5#ءrSس|Z)R u5qfhwۚJSBagӷֱH4Zu9n֊1kf˳N(' hn}pZ!;+~QxQ| Ѫs ?x~vGwˌϾ gM/ޮlS'ޅ){wt9@4rp"4KOXڝL5QJ"TbrNd5QIՔ-i.Ō<^Vz7WCc,shut@N9~?,7W/}Nf?F˼m aKrbzWJD6XrY`e8 /ˑFG!ΐh0m.Ŗ܅Rr;3!V`|8C[Ua b[8j}T:]x+ZҙRR_@)nH7*^ kX" Ii̛ۤ,_]TgCL=jI(%Ca-'_( 'ؿKw{Qcשּv:$47']ja-kx0~:Ko Hʗ oؐôee It /ӞQSHϧ(A;H;,C;g!/uJq:/ɧQ)-_f2ήcل+ncbDj(PZ[HM KΈ+A&P@9"\87C?ROKL. }E[ىrr+NhxSDo5;z44|2p^߯^lUfy6l~@/I~q^l[;/>-*ͧ4e0xfis|>.SPeҮϲWOIO&Jb5+No\DwE\Zne1}n<ͅ3nS9j*$22% C8:YgO9ssIHÓRP(Ps~9Er0jbLg5M-"9dNK/vїa2u1 TͪgNeUwLoU`w&o7*˺tMwTBǺu-0"ܢԝ;geCw探/w}.`66$nxST^Y?~/kF݃S1iZP9: )C!(ًc'އ낖 Mx!c-qOBb^u\)is\ΠDjԻ-JYxnҳʎg6mVmRevhtn?FZMT<_Ӭqz~;=7hF&I(mYVt7R~ve(>/ZWlב~=Oy'/E'NUYۛfyAQPz YѳH2!XL$UW/SUN+W\'$(& RNr0u`8ƾsv:2Obrd{vV&)Ŏ[tj N!/=;ܳGy\!'=z!}ч/d0pX;O8DHߎos} $"7Cò/tډ>I7j >F]~`;9 C>ǘJ?WNmKz01bK[ȯ ޽U4?M#Y`)g7 \>L% Tg,C|z'!OEeH1=ʼn9@g*_zUK+P] –ic[g1t=L-4P|do`\j > H"Qh;497lٻ6$ ݯp>.. 
6˗,~ZȒb{߯zHQ×XcCpWU(l2Ҍ%P2Fb`eG?C]"#^ս2.q2fb \)'>goe\5wJ/jMFq[i>d [E߈*4%Gu': C$:fx~q% ķ|T}_~cQB>x>!>{s ShٳQ*]n-j'Wwz1tN+T`uDDWI [ecpJU8!ܶGoz4دCv:IJJlҪRRސE6S2nuZ1 .C*_IE\1oup_ӘK-9rëuف>(qrի)yUt ntx]=sHq cI 't[>Ό!/BEnIIs8Չ_]B2\MCGNKq!Y΍i< UU8> W*nVw=^$ *bDr\ p0~j:|hɞ+~|1,H jQ2]!@FfW7#:m1̿.m50EJɛGijDQTa',ybA}'9ՆoD׈,&\ݟTî"J."J.kipI$Ww>%2E'I0FJC9|~2A5nB^ @g\PKnVfY]΃,FkҨ:LZ- ȮEv}.st}򬬡&%Y&& 2Ŏ0@|q0v]`T i\5wX?=|o'58;&Bp4n*#n~N 2regu\ԥtDΐ]nhr]/nYm\:Ϯ8'ih>mg:l<˞cRYTK .@+2ܞNrMopȶ3M-|3a\<`5t4'Jg8\5O.kPQT1aPRM!8f!N{^kabγ49#O/4RI}XIBttj"  eX>yqހ`#`(-~N"Zק("5@).<;qI@9~.g$yEDn~esJYչa6|$S`Kn}uT]\4uWzp~# dT MhcΙ7zߛ'i-,tPX Ș\cB% z?õ_ō\2lC5vR/78yJKՔg9eѝ7kt J:uk$f%xQ[?ɕ9BNBŞuA4:Br_cnQq:p.S`&ɢqj0Jea: FKc=2֓FǻY̶5k4p[ӻeZR - _uQ ~i|8:@q gMsJޗn*0K>T@svֈhIrpCQU7oV)lʓO6لy_'L2/ ɋ ׾(){vo?˛%1Ƌb >S E,Nu Ǫv bAU> Gբ6\2ϭP&v|ĺ62C .q',d=vWEyc@S>oq`mYJ,xxxxdcIZjf6g%QtC *i`eKtS ,H0z6FL*%V#T:eۙ nCwiV̦swtW`JZt0Scߝ풑R;qdSLTJ*-lW9 \ IH7:wQ#3މӖϩIE\qT(gr72Ԇv1V\Hl7}(%nqp`mg%IԛzE/n-]7ƂS/r+6V[:\P &}pm<$q,$Xp /<HXktxIuaMh(j -~ȅQ&K%.IT(DTIcn<Ѣl9ݲM`pNZ%J0Hv[m&/vQK@\F{=J:Q.űdǢ-݅@:]j&~Le]u~2`GP suNO7ob;FYʥ&N#pǴ=lxVI{&~_-뀺!ˉk+-U8@q9)G : - \=׷Kltx7RvMWo8.v㬺Tv0 *Ο̕_cS8=y_ͮgnec δ'ev ^TǡJ^xw+nIp kӃsq;=%(9[3;y\\'`ɅP:d+^zep({A}#1-Wt/[n[tL$0ʱx/,vP5[Lv7y%M $aw'52>2E%B0*CTȺp 2ܽa9 XhЫȶI]*ΓQoUQ>ýv~_|"u>#bF`ijWoRh|:R^$†'^ whP֤Yܠ!%]J;pjLF!'uLKL :љ`7TƄ >[4שTKэE֬As=˦}TQ;?ͮ%:oGs;C],ĩǟu4ˁO1|we]xv_nQ_:=O1+վļc(9WdhkW^|C5X9W=IW.]dJ62ԟϒnFSn]1(cntSNuhvCBr$SZǹ4'uŠDtF=.7?b0yڭ E`c!ɹM6_(`Q&$BpJS*r̡h5a+̀rћᮡ5zsSK(iB C|gM.Έm4͑$tGx.l-1bL^F䠸> HKYn}P#V!kǺ"sZՃ޶d>c~w<-W UBQXFEo |(_d 9+CJsZ& o.%܌'=aRNAG䬣Q f| %}0TE||˯K1Cla;e\SfhA,V?s*[8Kd֊ mV}ŵ Xĵ'[u觧|KcB,ZX xEP  ޯ A&`R\%ץ %LR4g*#(gpCp!36é='Ӈ:ոap3c9v|%4ZرU( iC>nWK h^A,YcW Ւ0kiV!T:}%.ft]{̻75H^ύ {큕MV6{1JȜ,G"6(."T`Hۨ,%ѳȜrk]{݆|8s#r8nI\9mGv0ˆ.g\Gp'11$,>Jc'*BY"D}1NsYI܁jz<ԵĿwӠKGVLԚ JJKT 7wmOH.$ӠשF+n'?ɼ%'9&wȞP|_G6Ǎ2/VcUsuuV6| $zfU58# 5$5udy,KCFg;X|V.O=>] $EDՅ_6CSQ,7_ۢXk­SveOol$X_ΫUE=XNF DChbI kQ|2mc{^f7sB^zv(T=8 KI{iH NS"bKOW_t"oWۤ(6SA}als l?B}!&w:vh:OnP)8:O.P :KNF̄DOd) NL0iaLns+s`X u1V3gn fD 9Q Z 5i.J)<-U 
EFԍF %RP()Dp-'3N Kƈ6d',q&=Ol@{bJn⎏KŎs@qj-5Ėb%p<D_<,ir|xRB * 9c*%N勺 :s`_' Ϭm쌱ilG֒ko`o=Vd/Ut4Z`x>LHpӣ+YEhT=[g0^M;k ؓR)d^*|Zn^ {ڢcfe)QNjEA113&nP8 Cgh J002e<,V2CPYRHH2$ks!mw ~ν9EN.&%>zQ'tg0*DCVfXD D*Ň mJ'$^_a! ih1J(Q(Q`xAOum~ wnynj!D~~RnfHµ1 JsVLn5;PXZC'Ӱm6`p[t¥O=h @iɂjkڽzm&HwÏ˧Gb}hGx㩬c +w校BjO# 0'y;z?іzHSwäLDPTQPQ pFN3<(8|0Af1aAQz̔A1Q3&:A* l3mAV*K͔2 YIF崰haL[`V;؎#utEj %(]sGK7R7F؛XB#u9x5H~ɤ4l]VЕ\oԕJeuєfzbMuP*\^SZr:!(~*3t_'5I/z]>UY.xӏjBs'=<ä*Yg,N'֐kCS&IW 3_ h|~҄~vJp0O=wRXr+^7h9hr5RjĶZVά:!@+ OT s)R#2{RXPlnm%B<1Y&tRED.KWXJ䙆4HYӓ)S ̈Dqoa*$ixHwCjlWbD#΂o&0_aBԗ [1{ETd#Ga\/l6&_NX.0۟L/՜%Z3vБ?f]qF~G`JY_o~uɆoN`(x<&󛻱>XX߂P o6" =9ʚ [?9Z zFEP{S3e{zsC DێT=iA4&L*^ Y%4UIT,m/G_,J}du>gӷIVkW`$νννΫ۶m"q9IZ$ rrb@MR)\ W,S= vqy?H? An-jٕou__>>6>RŒFR܀ %!t]UXs <,%.ڸM`7ɜ*Z:-I+/猻gT>J ~H4-2- $3F9=)  g32#.U9aYQn pʤ"C8KEL)EL|E˴9R24YH2#ZClŏ!`JNV\e{\~R:D{MvYO䷞99+~NN]$jKL_ȷmLƖ%%>•Z7Xg#Ңʜ& cHw+wu{}z?l6?,}zlC?}}3Y)8'vg0TSs@op(.W-?>\c RelKʮRb7+T֞\r-SJ-njj% 4y֞Z$-:| nt nc 8c#-jrzoU*׵!Vz1eyğ]f%lq*@Gŵ1'Vs.R0;JVx?/id?yq |A1$.vn2߽A؏esfI=Lf =1+-:oE i9T "25yas4S%.SE$8X<;{=@M Xiι.=j?EP9k/c-K.2Y@ޣ,b^Y_Dv{p  AYAe?2'C?td@9KPѠ(8'UIP s:PKp>{ r;kVL9PFM ɤՌsG/f;)-)j{xFPJmH%KGy{o. p3u7d|u3X}O>[~>+k]v ~Zz;?(TD"&Edwe< }.^'?rT^\d!3E:H&DBMRLEg{4/wJr"{6:[a Z[)nˬ[!q.1U8q`2{գ@K~:!P!PRc"A>lLoO^0o]**J^{bYfAq^USiS1D9NٞjnguY-1G 1á=bQ9b g1F(z4CL5d:U9"}~G,7Ch lfp}ݫ𵍒8Y؎mM0oo7yi3|e~ n |Ae=QcG,حS'_;WJN@c۪GW`/ ~5 T=9n}'ǟ5zY*s_UF)V Y)jWcXal*hw'nZ_bq6 3h5>*& ePpt.x /:6qW}Q=3j{ U-q1ؘՒsɓ4qT̑0EjQYæ #i+=RD?ߣ)m)_X>dlgSJ~Atim_k' u غZ^=vכ;~lǿpV?\WK]IP0%5)0:u.\]]$9N*2S>i56zsGZA ;%C/*`^ JrZq&V [zcWÖ'r:>QU Q:'[GIʌHzF/o!YyA90Rw2G~D SUOY3L ԥ:i..p~huE؏? 
(vu Wc}9$H-"p9M`G_22g>0 A=Ո>G$AN/?OHH%\ }z { HP u8\ VCvP J*sm7T]5'7|,ʯWyBP ;5{il 5f x_0 ]_f4'45={toCr59ݫ~%wa v״W =55 A67۾h*ˏhHA'J GR\xeÎ(;M?Zj%GJݹRk*Ԉ7V`S9]_gp(Gxg}.wWɄAm߽yȅȒ%ۨG]tw§DI ]b912J \wh%@y!KRHxc4NmUW[h,hnӦH (mP Z$&/FȲpbx82y<I^RSqЂɶnc@qo+h$,'&0' EY8n^!-ܘj~:r'jiT`%H\L$!-u>g!;)ĸY% TS!~ TI ]<Re:qY$TAIgq1ͬi3W"/S kh*H*W)# Y.: dc|i(Me{qWl4T|_irv=#f y*n.ڙЧOT9TW{R-v V!ZcKF 6OE)0"!n~@0n?{z-5(zc&nkSwB E7UM4)OU WJB9˝B9F AwO'$ӽݫ3czHj,A<~}{JVrH4Q4%9HFBAc%F*Ԉ SKkhޥ`f37eFJ!MY j:9spS!R9R.rWXXf xf@Pet恡~NJԔtS?1JɅ m-9E-B$N40j  ǖ;=BKd8_ʡ~Ԣ#:epb+zZxAz?cK(,ٻ8n$W} p=K ^p8bCdaMv,;)EgRϫf$OOw)vG4MDϘNnJKQǞ9+rRscTC@#Lix ZEə : ZjGR5- NUޑ(Zt [jҚƳlx%UVv2Ʌ cAW;C4f萬'3/5y6J+66Q :$H5y]u-_m4;=E |_VMۉ9rkh}wO:N׋N%9=%2kZ^Lxf{J'9S]TܖoP 6,=T3 1a7Vi!swd9қS0"HE;TiwM&͖t?%pcg CR۔:U^ s(R2^/_mi8XJyv'^SĮ',Kֆf 3s:W*f ETvXηZ5 yo1aj-L: CLSDQ f C~M{1aȅYdwzjFxOlB+ªsISY$R\[mG,utxF=tޢ x@$%*Ti(1/c4G& !3; NG!LҊIzݖ.}T,U3;a 炎u`!DOaJQ6NJ&JVJlHRÑr׍NmΏ[ 7͏IC=P|9{ uM)(3ߖKnC󶏊Ȯ`BB K;56>:8PG2]49&dߡV|R%Ͱ8.Ŝr*a\+~RZ+cx+_bi}J5_ 4[59w6J0|}Y@oCN ]dL}5s*騞aoYy$'2O+vi,w}^'+osķ==0jofHt@'5N @_L`F 0gbhT9R'F6.kz={ pRɳ7;%`iMr`W`)ŀvC 5P̹ÞJg^Ϙ\PC$j*^tŗ`kV#8WDZu[&'3(ऄ>OlPKt>`cTMh> %n #N չ'^7L@QFzM:=%K&jB4\q[irL:/-2k+;-|ҁ^6*uU ,x)ȉWŀ{%z3θt=D'ݨ z՞9ư`] %=ț2u4FXFw i׎g(RYowKSJ.ɷ(wXjH;v 16%6X_'f*lwF Jf~`J8 ]f]7rJt^\G΂׏,j V7MQjg}뇊6+JwzXnQG(&oeG I lZ:u9gQ<`~Aa?[#ӥi{C[|QACe)(, =Կf+yf234FL[bU˿3Էqo_mj̼Aږr_4qo\s6ņMع5D8Ne9Smft՗x{=2X1]%ӉE7).0 n.Qٹmb9U\͹ƿ]=;<ҏt]-ޑql~{Sǧ?+xOJqufO3}~ߞ٫7+7 5 S^1mڂ3#sgmitOi4?.j뮹 T]UUj5D>ha̫EogNqЍU3ugS90lZ5F+-0׍r*z"zV:J)6An [P~v]hP+'/|'o&\[vF(Mmtݳ8&U_dN-Qeh3A<-zޗ i)aV؝eݏ@4jZ%`_lRADcy,H@&00LG V8<; ܬ;>m߼ KF*{6W7 ޚ?͟w};@Gw OY|:*m/}BeB&E}hlZ^uiZ)njcphvALu'7S~?n)Hg*#\px2{svs [~ psGkۆ)k?~t]YrVͣ~x{;H)h_Uz_fCL#3r{̂O7bljՠ#- šJ^_!̹ήlR+I>6˃͛j̉T3R5o@5V)7YWv슫ߝ7={U#uo{yVPo8`|oHkPJ8/~TV8M;qL f.R&L)?E| j@]|M"n|_nm%(_Ѕ?Oy@YSL1L2LR3w6 بx`x5͡Ҽ"z^d/u6@ݑyڋ&(MlpfB)<vU˅T Zm?]PPE'\EOH[)ڨ*z 31VUXE-wL0'FyMZ_61O)x?i{˜[R鲃̜^ؔ3H͗/ujBSOi&'Wxf+#DEGCAo4͢3K}n+ͯJ۝^z;?~}`quZJ h*4m=]'~*sH󜃙 }L']Jnz2ˋ.g:Wl_T-)i T6xhp721䋘P2JerdpnGrW217/8[?ͣ 
7UM"J`LV.o&>{$jm#lcBM yɃpK%9=g&v4JSx\V*_Ei#_#E< a珷={??>|y7rM?rgh(厜_h9 h%R^!MHm& - ^4!.!oHIח8._+jҐKvwJr^a&9I|ƉS_gS_'@V)&rc.m_gǟ dU%!p,}2lk[׼ŵ,^J GCN4瘏-2-_8zҩ` F-9SzԒ-ͻ|Yd3YHccNFCrea܊z[Q@;33bbyb"o^QLd]K1'ŕWÍDYT<Å򖗿25V=HE&_WK_?]M_| =8aKxg Nl Nb QɀF;ă MauP8^_N$6j9(8ZD8XDX8$~j o1p)>S/i2dYy#튿aW!/XvRl~K>Y8 az,>˼O2ӱ&u<3N;k k t#uEЩ5^ ղ& ]Eu0ix(iWfMCIfU?w?:s(1XkX#$sAT#.e@D,8fL./@W{tʉӣ"g^ zsؒn,W 7ȹ+xyLYh8LچZaCj#TGZE5:vZp=˂.hmTVahN}(#zH/a LaJI9jȝEEj+QD佰6Ն @^Ml,7)@>eop1Cg"=zXb̹VD}3xRIDyoe65rFiC$ֶfUvkՊ{9u#^Q*٦mmρTBi?qDM\RD1JVof UEV]B(7-!2'HؠInSjإy"e8ZlsKǽLLcxv6!#Eܒ`8 ;d! /tD,b|tkw_pw""\(F),y)*9Nq\TUN@:}zЮ2ݲ.^SvF%25b1+tōF=%,HdAM˔ZMLyEy)߼%LLIy㩩=iP 'l +Nc '2?x 2p8Ǔ%u4(vcMAvAv"ҍAPdE*Gnq T壕AbZ8";t? "w{v) bYbD*IY!pfD!ؖ3c+.O vtfGՙV\;v_6ZិzwhEBVy.n mb^|XLM&g!˷7/\ &WӰ,.+S%Mo/feDbpswi6/<{P쭃_Y_s-4c5dcx;}1Wϯyq"V}H)?Uc:֢VF:q|[6]t*tv)feG=pTsP5#l0M[\6\AOJJjv+b fɷƼ&nӬHm@X [T;Y5!pԏ > )7c38z8SD(Qɡ(\$rԥw<C~*_AV˖GcX(`8˕LKaX zE1-)c(۔iQҢN[:'8$sf.9" AW0) /@YipՀ˶5VXx^V0 >4 hk0Oauh,j P+=4;VQ[eBiԴ0T'EɘG0Q5' NFJEHdF̈́ٓ֋Mz4F%ύΗR:'Jɦ1,='r910܀ 3)!H*})Ԥ))DZ(׾F]ZR&Qz<ؙI=d%,-"mO]mK/2.n.>yuF{4n[İBx-}`On e [f4$|Mu*584_\聙V rT̏-I[F֘MKY\?|YUuȼj)먲T]d DJ7H6V1֠_czlha` _ΦycC* i.o={+ zj8~C򎳹=}l>/MǪ._W|+q¬l%lo]oYE hX bj 4h] CD-yVr1L6o- ,RN[A镠UY`”ZBmA,ޕ;6ͧ&]R 3JU{-XU VY5x qR#&0Mզ:T@]#l,䥧qEcd=ϣ~1zȎKiB|Neo6 ̷^dvDMEχ+$#FC8|-;}*~.??FKϕ(unh:? 
-[U?TU]>}Ȭ.eodVpm[%AڪvTwvCjC;`{uTb{lxqC>[6 /lQE ]uVQsL{T`9xL"um4`V~[~]ZmO An krY/qoz jjz:芳ƢSl~{9J7M(|wfUʯtxUh3ƒqe㭱a/ʜ[G45/gUq#2r7B gNwQl\Zs,8YuBD()tL; >Gw/ <9v2`pd4(gv4Q8fRdudrF9',v*k8_Tt(8mz(8Ejx+d 9()K[!yzT(ͤ/7FT(l[Uk AVR_&i>tk$0aL#6b(XSk].'Z,Lmj9#2H)+$ '`AnH-QV3UdaiVΘ ݁d᫯nR) jU9JGB%T ՠ k-ϛ_;Tmؗ]aHj(Cnu9U>s 4W^(A7pIu+-j3§?X#)LUG{ޔܦS6 >YOsʢs*5#l!V$s߭FKOKE%Zyn€IiukQ٩yQqC4IȆYe%TT7 Ͳ\5< řeQE$T GGnQBCI عXKD'?zr4We9XV 2%kH6d*Ľ.24St`ќ\&xh+Hpxܣ,ҿ"dVb) , .G6[K+,seTq>&7/fOn1 MrC+лi,?Lwj1:NYW2nR &5FF)RxπjFؘu^0k=jLa:˦4hlB(o ykF/y 3abn|.az-u ?\۔f{Q2m/%ct}TV⇅x 8@Ұq{M2&˼z <:[(,GtRh֞g+YQ$Z?ʮm h^>G SNy#"$*F'U兰\Qs';*YC{UIU*M(.[fi4GZQTJ %A+Q1H9K Hd1F?jͭ5wVY.U6VAIOd2%܁E"+\i@%@ 2硯"fDf)L"g59)lF+bF Brxr*j}Cb'%9Cb7[X7fg ēpz/WWָWwK) bEt{w=4pEI@؞|$˽:֖")_wesOˍ8{߶X77]'O^^]"^n(XJfp$,.WK/5C4%^)Bs,a![؊-0! U}ud67l !SaO8:jTq ⱑJRH .Y ;>)aB%dd넴VȑJm]? y*Ysai! 4^6ŏn 4tw#RZJR3A l6(VY12v&^޾F04dhh,Z)aRHMkܯEuծBx }t!ã$P@PbfYԍc1n@Pc-8mC)ĝ*-gWaF:Y L t@7 9z%G:ƕ4e+>5޴v_sPʲƜ`!fڨ FĢ%#FܶQa[;'U](>fmÍ6ц(U evN=5:J$9k'GVpp:]Ys7+zٝ w/`a-y/3zE Jf5$5w}-((CYJdfL|yU{B+[:['FG&nlI&l G $<1ތR)\pLJYJW3j[@a u+erarSY`|&wp *rQ¡!BT7;+P䶮jU4UUw{ZĆ  ፄӀVX >EPֈ"wSh>ԠA-aH0V8Ѣ8Ý8J 80:"VGf[ M$or2qɞ\&9ɔw>@kK]\SL8W?pFuh0x|:Vww݅s_^ mO\2RVM'htu&2yלH*X7שk;fmM!_[8֚߀̈A#"D$(ڷlbEokh/P7 ;>u20 X?,WH/{dVVz>4kg.cJxKd;&*w򷏧_>S"}O*ҝ1q6Mmw6(غ`ry:Hn]Sp&n]m y*S,>#ź͙E>ӺnCh3W:awkYq{"G$򷉜XJpW$*[;嗣ib=K6'pvȮ_n*z/7{moj՗p2J|,}pw5a,>CSCi 'DҪW-v,fJEŲrٙ#fʀKy{[*R YrV4tkJ!Hz)̢akƴ[צTk5[[tƱJ1^dW1DQXk ?>7.EOD\Aı%`YF ]1+iQ/ 9UˁH|^d <ťʦ='a6g^Xg#M80/v)I&,H$=~ |$=JS0FZ71D"ڀE%WTBVq)L qqFTبdu]RZ[X¸BiҪZ:[Z# FR־i(n׉D jY#҄p@iڃs]{r/uJ(vSeif m:d<%Hm^xۘxWȆ)7hF0U{-|\Y2YYhSZWUpZR zWl ƞe`N?~XǦ8sKШq )#S)dPsd7h-c$h4;$^M#dN=)@x>kdd^#)7SLwB167KZVO`9rE 2Gk:V %֎sH0:qdL C>N`Zoq-(9ls*$е%-Z&Ca'z,H,.D6#t>i&!t-S`|. 
t-IZ~еtSړ!N߼Fm\O1D:ۄ8]A&ui{'ӻuY`zC2$Ƈ.x&XO\PmfpSV=K*6ZeTKal-wj F-R)Qx5HT~L }t,U4KZ'9nr]qF˃}Fu/:2;K"U4G838jAhry:Hn3f$-5Һ !\Et*r '(YeNn) ,`6 Q)^%gA/2,އs 6lD0{WY0/'12xy$ PA)_]bC |<2F )-xƓ\v4=^"C,ϟυ"^ f"xH/bA]k| +ܡ,^~x+jWkDhv5rey 5'oX \,ǰKkjFPbLJJZCMQIU)+\/vM:Ԇ˸m:dEoLJK6R, ŭ4b (cI(A<ӂhL45 ]XqRݵJ iUV);/H[*"鿄"isԚd(5PYD{s${ܦKZ BvKTAv~:\axM?ND#J;fRBQ5+j]yYhYI\w<+B[0 1yb9V%]H(@xyB`Kh:JȾ<ŞI,h3^$GR*k69y\\)iS]^Rm/]0RR0\hc,k )k^|QkZSXcȢ,CgWA>JA) |g3X^ed dEQRxA+ei4F%ֺ*((!%]C~djĢdV1č m4JO[,I[eEm)ˌ,4/(֚ 8dq\lVtbNrCߢ E"bḔ|92,KCs G]Z&5ӢD2z ѬDU+Jk _Eɰc;n#, .O| Q9ǨE AA0n0nTNP2ŒDIhFl)89'ʼnCI rna者jA$.# Rm` &勬vd`LHԮfH2Kr鍥<⥓@&E?gt [D 8 Yor;mkF! |R4#}4ŕSWHpK$DOiUE MV}t+]*t +@Y$RBZNԾS-ɹ QJl8 (βX84d_vD6l--DbG%KArx"Jƚ=ȐTg,3Oɠ[ה Vu:Yתγ9]XÁ^٩vղuZf퉚{0sjyB nY[TStO&gϠ0;-hL\;̮/tn@P;Zx$ysK-4}T9iG~+AKiAm321.ٽodb:F&H<%6VkkHyߘ*wRtt;v00JUlL0d(d 8t l$zI6",==@R8LXxx ,XC7Q @/1PZSZGK1c/3Jz~f/a%[p$ˑ *4v6&Ҥ5cRMx>j7ׅpu=!ׅБ 8Q[?HRz]tgPUh9E{JB%=Sw6#(GkAf82Hkfz d3ů6f8O+)G]oKI}]ZRsRzEBNdtU8㟷Rb]HH}]S)6qbME$,0v/qr&DHy{^c1c"9bbĔ&K2 R켫TH[gr.zZ.Jn>(9 r&Ȗ ZMeMg[Ҹ>Z3F[=׷?WDC&Bw 5ymWƽԸFIs{K,*!_0g>-iKmW;ry}_M+Ām.푝\W<. [K-+h & ; ށiڅIYj2w8zwpw ; ѧN O'4EmߩSV86/ΤS1hlZ7iWY-qG/Nә,4IZ$ "1qBq)m1pڠHo hA۴O2[LYV9|qvlrLi7(8Zb"h)Z*=^&9%TObB_fU߽{$1an\RuT Ͳ_Wį?|,sJͣ 7t =DeKG[~. t q9sɔmr7FAM@R'@YmBRЧ q L- LkeɓVFv#(q 3 S*WrPSU ÓV!)8NTAn gPr4FTȨ6Ǧt56C_ܒ ߟ2%lnuKe'N/H ~SEG'ѳiޟEfª.Z5qh)_#C_nUشy pOG!,fdVF42Zs j-`ەe|+upӳJGA;r'lvj;C? 
=h$A kP# 8!ŭqLR6.RcGA4;N2q.Aī!2@6m0ƅ^ P<iމ[ Ըk' eNa?C*B u[CfDnx̛Cs=OW51U$$C$6B3V ˨߰@LsJU*f0S)_) 2ceĘʜiRb T ̤D[>d)ruGNs|/|*b)g?з\?=Iĸ7s|׽aS`nܠ/Ah4_x1/Ow&#¢"on"byC-\Řߞ|~72<^>:1ɉsWߗӖpPLɋ?=޺wyŶ}a?|v_L~W͏/]wK5!z q<gD2R+52ya^*xSza^KJ6z{)"Aa>Lt/F5rRTͦR楹Ԋd𼽔a29S0b)0/ͥV0/{bz^uRrKKKET}1ui!c,=g/W$?7UD*50/e#2NjKKy`,yT="5%g üdF,'uUjMQ^z^ =`c)5/Sk\u;z9z)a^JR†ᥔyi!5c꼽4,Pr^v?7Dm\} B)H$DؔY%T1<@ H6[!'8  Xqwvk.Rnr{{禔sBznW҆qV׿kL3 hW!̪Z\vh?R64ilȝmlҺMتk&]mJӼM]z?+b+Ƈ4dh3lm©oa]Z>Wah49k {{N-@`5%tj[g($`)|IvkQ0-:mX&Nz-DŀRhRVNW=modh/ mLs<F޴ RfY䇨tۏ[jhPtdT85|]וWzNWr?DMs(9 ͚ɔHHl@"F4q4]0Fg!I=!W(uhyTsApƟLmޖs׋M5$'9@'=m| "O_rPc,<mihǑ_*SAd brJU44]2BrݗY~d]o/I_ݨN`_])"~&SCpHidǃ!C+#,b@1$%($À cS8cRSL)8!TTS )ISH"5ܽDg #õ)Jjݻ-*KM ' eSRKrMQd\] ^DtMLj U"N%b@7xS=)X?tD%)F"9~omCpc)L4?J ݢ,dnef*r3U涒K7NS ,Wkj - j@1DZ=9 mO>}`@IۧeGhTT_6PDq N7j k̳ͷ]`hLo;EƦ2юo< Zut0ᐔΫχ,߇!XyGNm"NVy+468l@HW>Ǡ"LN|FKr|B٠[? NKP:>r mʯ^arISJQ[_%奓R=R2m">? "|\zզ,Ohv"G=޳?={d1@*:тCYnkDMN8-O#2r?Bb/͇+դ fF̤釮&a9nEzpQ$|kʿ9YGsY.NJ*.! YZ/&ch%?$LqjK % 4b_; jخ(< 2b&aQ1z X}a[h[>u?Z͖gpDvtmT؊dђ}ėSKIѲԚ%lz_=eX?R9=yxs^zsv=*]xH]Vj#XO]g\]Orm͉z[u%b(u8JSNfNutJX& )lj(m iPĹD}7"q#pC[?]?TB*Ck"(eRo6&os'D)E70lh]hR[kMB./.3se.߮!as9U\k)k,xUC3aET})>at.r΀^֩X-[L&."-^>WT|,|?sLY%'[ Åd|Xb|FwYdүoYҩdzyөzYFuQԻ̛Gla}<$i# Z0F2^l}Z؟aX#s4cBJb#֊XaS,yBH12*ʔ$F)2*ED8 MbQFޫbQ]J.mV'k6+)&3ZWrA쒺֭$G* |Gpܞ 4[ISEg>5R[ɣRg *uE\/]>kY1!G+.76orV '^<Z|m*ܦJەC(ayji*_>ˊiIi ߏd1ܢ2_R+cJa!5I1)4KcxNpF7VIܵoJWb,N?'sѰX=7r'q>߳|Ve'e̶ &Jl\=ǃQ(>$bS. 
k@M5 xrmm%R4@ ,]}}(Vؤ=om5 KOyt綕W|{b(ޔ*F?ٿڮ}Ym%H"+@HBE4ڋYO6:SEyLzIxp"y/)Cj=)D,RE[iX0ehQ҄U>t!t OVX`)@CKjq_2S#DX3{!H#SoPcK e=nШGDUy[T+/kDggBq Ld%ϒ#g(=1ef7 qc3Jˌ1F9#.FVi8OGG/dq^RX"/Y{ߍz,rJ dWVR']K#lȪrY b*Dj,ok5FeYa]e!€ U%X+ 0BAUhm<}yq Q>+-B_GD:4>F¥4ȂlExh 1B-AS`Hy?R+-էṥB̽fg/u׉x퐦V~h39A$x)ZQ*$:fE Bp8i"SqtzEz4 9\)c\Ču8q7%qK:XZ)fũXL:$!Ţڢs X/!NCr>]שb$ $;dq0Uyu>MSlㆀ:t]I{fy&y{{|A`m?o5^C[bqH<[Ǜq@6jqm;#fNY&e$b6/UR̐"d{l1̚-턠Wɾ+?nRGb {-|e^'1[3FF<5(=y>O+۞E3)yar$ebh, U,`K{68/·5s"A-0B :,K}XJZ`}۱Z@Xk<#xȉ YFC&dg%eByɢХsTeEɸAsYs]ϣv"J`}FTʙPMci qee` /uUec*2JFI'EAY]Ze7 R  %hu _!H2 #q_^2`]bfq_cXZQEwC_^ht`#z'k<;1.[fC. }Ay@ympĥ_jBO1NLu;Z}yv\ʜar)Ām):^Lqgs2xB4n`\9nߔ+G=X n ͢>cH2+N gp8kǘ=70 $IKSWƧ)P81rJ=v45%OTE*h L3Q$pGmpj '3"?V52ؑ4jz@ $ɭc0BKYLktda OZj*F9y}F4h]ʵqxEɅ 'Y4e Qd>e.`ߡo1[2Nѣ䳵$ q9/(SyқQH5%ZL fY}~ / xZ~d/˒qYLe,n%d-FÃSst\ -L"^l ]*t6!]ZT`VU.6V*-e йL9 ZYT!rhw8RҦP$ ʔ."CT$ t RUs.. b 1G mn,M("mi$@`]BI0TPORi>i-l `m8mGKV"T,ztN YQ+MZ=ío ꦺqb=?A=IX2[Vq:tZxzqPEyتXUY.3zaYE~tԋ{rCj$1n'cQbtGB|LEvnӅOrn>.BсOzK;o?Y"#:YDH/!mP EsWC79ĩHݳuv`"i>^ב%k&=~ ޤG7 - Ѷjj!!JSYm06 K>AC{MY[}`pSR])7QwK,6E&7~W,}5N,w|&a\zfϏH5 o̳?]vuc`pH;Ʌj#b6mdڕRdLrÜ\[>|1-1|ʮ>e]ܜ&&kxzuQ‡VX<,޽}ym^O8cKEKAFX=4;?X&8;K?r fc4T;Ԍ`A'6Taq+Ն*,CTaϟ~*i’ r<Ӄv4ѭ_S,kl?7?#N[6jrϖe;+g4hP% ~',)D.*;]T$,n_>aO!9{Io7xվ{ם;IMpw $w#=<p_BSWuCM[|[yso>6y׾ K['7߅w'=f2yƙA6zZs>s?gFkCn__Wï k9{֮n0']OJ@5zZNݺ u~#%M})hv:u/En}x7|g; EC [WĸNoD w >{=oEO) v\?qNwe{>_=_?C̓W(4>[]K&pu0v# I/k\yα ARECRA٨~ #a$7%ʯyD6άG4CcƖnj1^YgNX#H1 }S[BW77#]pR}Q $!H<BV{vx ֓nmDlӥ<ƛZj[h+I"ˌXNs`ϧBUx +, t2PVmqZ{_L݂#G_`E8u gז 1̢oTv]t4ƕ5E\LK^b#| tsFsްN7l2Qr(Ţ-nѓA>?1ٞ?NZ0NV;6<$ ckTWϴ)WqOvu5jr4rtw eF|cn J@qgD ~w笏m_u\!Vu`䓃)Ӫ}bjjI8ZllW\Ge1&3 {dKU>kYkR>}G/pR^ۍ?fk]ҝųo>Ȝ[lFӗNs?aaKf\H6ݪG1 5{.SQ}\<}!Լʻ\lE k襤%VJ G/DdzŠbQZ..DQZl95c.l6 ꀅTu2Hq&R44v<e7R^F`$@e.چ dlEv \H#m3H7:b`HeO*1TdKƥBn iK Шkf1dbѿ>?5;qUtM_>鯋=\:,߿ cpyCpsX) WtT&wVFTHUsf$" PUA^%Zo>ogpaJ![L} \]6M-HOZ=ʓcRL6]#+ł\$C E!0ϝ2T%dJR?oi&+U`W@ qZ8ZzBw%G6$u2`v-HHֹ{ Ի3Kh_mHO=̞) :wzVޗJEmjts(LxՈ  ڇ޷u{hk{JD ~n6m ?'/S:wƛT *n waWˀxpU҉XPf*b~ڻkPDL㵋N 㖩 qڵZRiph%~zt+hVEI$tN_ 
ӕ<uN9Vi3޵6"%8G2oŋyLdO;$0+Edg7-nK)m;b}⥪ZW:_t猑ڵ{`If\I٠u rz ~} ~9?N 0ihfG wQmMbSFS7 ?086a!Dld{wXAx@8N^ޭsmEw-1ջEn!6E1лAףw Ձ鄎EH3)ѻŰn{6Emn(=Jv@"L:N[QXi Q匈,P@.|uN7|z>ަ!8}5h#Ft#D)N:bq2 wi2a_D`0$D4L5-"o]3K YCIs?b2xF@뎧p n~k$S8IzϜ|Z}V8H}E$JJeHSf0R'@O>$sQ!DKΧ`SVj+MI#Vd:ݎ rɁ0Q*ӀH{'NFzZ}H;#b3s~guԕ`aHAۋ ?r5c\ a}b!bnĸR\a\H4Oaz > AB'`'B >}^[yݴfU hYHSI(V)ͬx!*19e.JD%^ B!~;|>Ѡa& Tq4=OH^ D(x^fHaQB`C \M*$F@G#^zd1C(""eWo@1YBZ ئWR96A QB >}_啔F3|ګ+UbLgX猳*cP@VJdD,r@%Q'2 0S,T q"v^?F;T=h6 !Tux_IR2_r@"enՙQ̸\"$Qi]/ @Qd&,߇e2K\ y!tQ L c-'LJyF@&&%KS ;5?H6Cl )c^3u  ~֧C͙xW5 Q]Vܘ9r5:-3m.o^&|Zص\>[W?̍m B-Ϯ#eB|\zd,}m|71C?|%ʹ.7͛-`9n'b1yzaܛV< OZC˧%Kh?wSYC=Yh^>]pUٷy+46&˪)Oظ9 񄔍vsK^u Qd,qRqsHzĹ+3~;q]Hskˇ]7bDkATB͹tAQɥCBٳ&k; Vz( XCh,I5@PtNɔ#??⵿ey>GС0+Jl0{` ]xQBb_amy)(,s,e\EŖ.A( 79I8ֆ mzM]꼸Ο4^t[~2b} 홍O^EH|>"+n6giʼnfs_-<)`$/47ipƍӖ1\|v8 BA1&w7ÞiRw!Nbh1B5:gP1^/`ݡ'Bql`Xa荫P J/g`(fE }=}R bY潘}Sp )DeR"r%I^VVX 02kBeea/U*xU)R,˼de.@ 2Jc+(VR#QJEToQaьeQq.$K8Nֵ0&ԝ #LQ+zHmܓ!t]Bs} ͈ʸ [vFeD)S 1!@^t]c_ 6҃w>ndH,e& !QVPhK[9e8ϩ%AXB3LSo+U% -t]rM/Q ?g6`)͢s E`{zJ/O>|!H-`c?`g`@Qk46y?3Jq</ (pf ,oSQCK.l Vq>lC !;"-:5( I ~-2GB-.s˛ՏF $EDs8 @y@9Ase[$ 4ˡifߔTB9Ueؚڴxq꓄4mȀ"p c 1XBL'Z9.@ 3eqG)D&YzThp~̏88ڿ/ 5ۗ1rMH;_ZdK# 3??,]Fz˴~sYUZ28"a14Q3g:$Uw#Q$=D+%'RISZ^>.r;o>F]2g| - %g;߳83m>9ň(H slC\s{޶?׹ нD0U'gA?+?%S[R"uPNd QRFYX -LC.4Ǖ̪\1g(#(d E4 9-\Zv3?PD&:UIu躂k wEd;Sˢ JsyI-BPsUaTTpHQ`DFԸckѨ^egyTxT!ZX"Y^[X,婰v-gF|!rӤrec"'äZJrcǘ|q(Ò(W#;D|/L2,7EǾT1iofJKKHk'ڲm uT@omo/FւILP|iE`^:䫋?,ji-8p:w&!l̹ Cڜ!pJPpL]Q+J:hՇrh3yIJt.|z鰸aגJj<K^$!m[n[8FF[fҊ':GeqQ\;˯K;H7X;GI5 I«N1q 5:|ɐz'#;7Ba`1 j5-xvAFB~uw Az!Kkoojo)}Qw|?G:X1"'bl0:(S||0n"u';} drͯVwuxOS85zF P}2#Nz0 j:]Wc|Ez1pr&*T^@'=]GgT#=saqnepsܜ̕nh3A67~³/-z+E KGhΒX>,t ,() k+i,"B}K׏^gv>RWzkB(d n|)$Yb{5}p:]"@dDOGpdW=m6FӔ+Tv|r)ϹeY U5EEF//Ҹiek6uqFE;:`T8!y$=e)=(ƸWC~p6~Y{477[s: 0[.i(Qs-nuԏI7ڈx {j""6Wx|؛ɗjZ<=G@Hq9n{z`FM^:GtR0=Y䢿VdRD-c|()Bݻrԑm#Ky><%`lciMWDЁ&_N& ӏ]]fL7zF7|}3wk>aZ{8_!ew6CbɀЏjc=rYߪnlR*V?2 [ܺ>1`{Yz7sM|_YD8NQ080b@q4%fPD(&AExIXiVwº_ݫmN" )W45ҟ=y#ҌL$i:!#Hb8AIM`Sܘ`7s7G 84ќS2Je<δ7CId-Ogzϫy]3[1 ։oCdFI.ΫQT|Z;凟=y ܳ7Fo?Fy=YZ^E\-Tߴ3 
@rFxXS[NV'Fܾ==-n:b${C9[]id.;E;+蚲{!&wC%Kes2s@8H(+ (4 I$$p_A+n)]&dV&m7o䴴kd5Q'$"?"33$Q&qf4#&"((o5Rr8I#}\1(U)@eTD"Q]KPdVJ26s0W,ðl׿r,J=zzj݆ cM.QݥzHޞݎj>ycif/1lS?FaSxhh &jNI)7RXvFTLB}^n7t]?dժDpv<;wQ/*a ~ZF7_ l\.mݼ+jE|/E|1qoDh$nnmxw{>%-}JN9XM¾3tu"=Ob~ݬvm#ǣyhu{9-.ffnn2^ Jn5f(}W_[~jikfdiW`N˥iKO 5z|Z͟Vi|Z\\@ħ@.ŭV7Yt7Z%>dL[P҄'$~L]z` @{nߚ`t9~bCl]3QXPԄ"Mp"ıLY,{qj䵱ݪrqԔz bÅynJ%E%3ͻ=嚝%gi iPYb3X"m)Vtw^1% jW|0Ë!(l0fuȓ? i3Z’}dwbp g0ltH'# QMB-H YK 9^kX!O:J"tCRƌ-p wx\ [F+FZ=\w7-I5ݑtV߭N&m;Ի'8[5hga$pXsww*^w2_XKv\gHY5׮Yo Ҿ1EAb-ɢ>i 0pӯٿnu s}zwȨ[2P@-_O0! kY6c@-(b!5&sv@0~f{ϭ{oΠQHLᬥKs|К pGw 7ln蝺 7jײV|7m7zEA M8I!tt䭡֡NQC-6N-p;C7:!#oS?ܰ'mpb#`0[k'[u)2Iچ4ț`݂([rg3(I|6fuƲ.Q+'@YLq6=cU?Z'Ok@A'#۷MM[o_E+}S,%LO74JN6ݭ W!gfI%.$:uso:e:zN\>?穴K QI c4WD>dҵ@ -ּHAdHRܫPAIiH2%(atJڬEۮRVfSRʖ!զ]@R=bAA1`kgf39RY^|CcWf6͟LZb__*[%叱Nv/JHU=|Wzbf-b~=I>zsX(X3c]U.ުC|}0Mtxwk^SnodcE-dɲs%+]0ٝ|?ŔhZsD+ wwn\0^<)A;+uynOE-BG$&t"8/Gy.=f2n RI)B\`L̖BAhZ29 I)`@DtVcU0Nq:cvwgFw&r*^RF2gsENe]xgRV䴈F\QgfckCt:RB#QZ` ?nY1(3JxC})0+aM[~F_nXV󍓦mo^!0qE-_kVmӒ |5iеj oGf8j-o͜ue,Ϋe#z#^cc1ex`y8LO_>M,)dSpҩ+NƓORtqԔŧggmԔhS1'"dX"H" K[A>Ϋ/hcYp`lmP9is@Yős9Cy7;s̾a ɰg]\%V.дsUj骎.V;7Wwk;A@vL䋷o^mRT*gMU$8q@U@xI$Hy,> v{_XM$;/r9R:yߪ^v"@XhA#mŘg,<U4% Hf2D2t|Mi"F;(| "$tő0EP\Ҕ!2j@RIK".ȓ9v|1v<3߻8a惷ӣ~'G DS 5~12n2'I*o{ Iy`JbZ)?V hNV@fʁ.Z~ZG G$Gk}zxeUG!+c"a~g[BeI;C7ݣ0CV_[-iIWb_M֝ KY}o5sسRd Ax!7jir@qiYe8/=l n <7}+V3]?Q,BKә[ :,'֨1Y ]!8Ճ)& vej lR5m1Gw^BD ׇ9eK']sJ*f7]硱z\ntJKx| NX<;flэQ̩<}f+aPz|6Ze!R7L`u[rdTbBb{f='4'No;KuMwFGa3jx'Fz^g~Y3SxP`0s4uV"4 w*Ga@DB(ZJ"iOD&_Ev_V7 xJ]qBU,-T(;XMˣ:DP>;Ӏ]ual~u!.S!h/;"ٻ4̍:"_yKJWՊC5^8l<eǨHyq΅e~76?0m/}7JoٛOԢ~` EWRҮh>Gd9Akg:EØb|^py>=' [V 5]-ʊ6J?O{ F'X<%Qh:]ˬQ#/nY>B:`:Nl;{|wm$KQHPDH"4"W"%PHDb1 4 i8m(P !FEN\DJXzNaڽ8I\Th DB,!~99a7/ͭ6K/K-r{ύ vM#7/(?ŻcfεNL'fړXc)V:TʘI&F OY3KNQOHKHH(ͬtPuݙjIݣWBglpύ0FS3)g+ՅwF ?nWC$kN$䚹E)0owaTQyj\p̹"- *Jm>,qzxd[kTVbENE[tUb"@vmMU.Sú]qO@028:8>x!ՂDڸP*"0ڞN{C֕ߪDž"𙥧Fp?FKqYL\LNXczFIFoPdo'9#q iHQ@5B;e>w&=}0& lvΦ8G7⹧pV/j:@tnydJ مnӍ+gS!q.6O Y(c-30ckq-eYh+gl>3oCNTyUH^G//\'mʭy)b9YN֩nPǎ]O]dĝ#}DD2܇V>U>q灾>045B& }cp=di4᫙AI/)hr7g 
i~,WC(S9jeS}־]F(cFYol2!dÔ`ph4=V Ta$ꃻ>w{Eą.i5KČ~'E@_t:׋[d3/%ᩀ톲CSrQVHdLCjR8`<"kas.qRH!yj [x#x$WeS5KiC `a1H/"u'j43  ي6 ۷Ow?Hp$\d]x(3U05)T ө)Vlwys,4]TfKE0?8BяB.(o}n$ 䙵tQsvO3mVHd~j05,Qc,fۓgY.Br̬ә^i-g sT%3'dE&WKnmpuƉ6BYyeM2oGhS9 $j44EUkk .)+:J0%59A[B)eD&Q 0A)DsdH$,$i3c\R| p`(}'@Ԇ:JƮحB HpJ$1 ٹ0F5 q M[N=ePqp4Oa D|3wrZmh[6̙R7gM{t464( ,1Eeg(Ŗٹz{,7C{uah~ZHP6>[u nf ]kn%`1[;zH,/9+~Ob#Tv*ww8)e]ky\QCh+Dl+4J+A2"k:tTCSo:MN÷J-/s%*oZJx,_o^‰OfƋ݉;M*5'V~xOo7[.L+~Oڴbyd.ՁN˼FwFK^UaC+9J -9F4Zw9c)fQJD@R@H{Ph;/!`./ٹu[ݫ*,;C-nf4uAI5j·'Zm<koǾu;gY׍i] XM28_9۶4t rޥ'l?ƞ[u!m7Y u;ʇ[/uReQui=_4e9]w 8!C:!N*A3|MdI,i]j~DَZ8v9.O|X%PQjs+Jjk]ϱSr?et;:y@Hm'#}ZmQj4O8jO:5o0 D] q-w`!ƐfJǑDX ӈsCD ӐTLrMIf=]+~2E2'ͿYbnrIi 洔// ]ZCV8%fn֋sWvQ/Fņvr=$3[7I8A<ϊi3e>xl9FA!8w Ap9=Z^LvVYsM8af[KOH[`zt;w]W̪=Vۜ[Qi!T|qgɎ]$bj^A7sl>᥸;K36 =u@HѢwsL§#{/ff/)N1Ko< Um KNEy< L RȬ|(%HyOԔe+<Gx<1> 4, ]L THTO?(?VȈ#$?S+8 \ gBXٍ/T%cf4Bp@)%Q Q%Xt)qiQ_U(Trd*}:k얳zS3+[|ޚW1VſW:pRNQԨ;^22<տ;:EǧIl&vI҄$v>`;^&]Ys7+ 9b q$pXݙQ+* ͦ<]@.zHY2PGżg3޼%p styjȞ_-ΪEj}&jsmIJcl=9(DqE_O$VΉJ T'ruiҾq؃=g_RNvc@؇t)/HQ\.ք?El` h?o z2>0HkD9iLE@\& KY  Mt$+v*SJX `SyN?NEaw(Y" sLuXH9ÄBfc`( ȝ٣؁W|i9>Zxśs-Ζ_)w@?aҕ oCTrI9rhlD h7 6GhC '+pI+xi [*KxjRI怋R*PUVrC.sDVTtZr_ښBIuOAFk~Y(z2A BHw{\j#i,w޸]tOuXUN/n^ߜ-Io~UY^,{<0wQE[J? B=^͂Ӌ#zh+%eyiU cB`-,c5p+ ie(/"YRe|d,bF[ ZA$CM#(!@$V_0%+J@S~d_3hJt`hnFbs5mD)PO? 
j4- %q+Bߍ3*ӔQ02:G744`6iFJWNH03(mzo"pdqh*3Z pUYZ*e(c~ίi ez`TTX7,J!0x  EwB:)wI$F걦`NdSOp_x}Ƿ-~;h`Zngkb-w/PtyjYlKp#2Lx0B|k]dfOzi"mzs2P(qQzc-Pnsv&8x˞ޡXWV(biµeyTGwC5M달fq&Aز4EG"{COaS-x}t d\df0m { O(ඊgWB7fR~'~رUŽ?l?~1"xǓ{DGul~hkN[uGz17oY֣;yd$wץ+$!(W:RwV:~8O//(Ǝf ︫/yF(eAyr(wMCg@ۺb4 Ɗ ¬=ލ~>zlxϋ٧G7KX+kyF.Sk\>Gn>^0;Z??O m VѾՅnOb=*jT Q[oVu+e _ކco8:)haJpjbzck훟|o 0V"ud Z3j dl&H~t*O@j-+J]?(RHQ T\:,BY:c* [΀I&]W7{/uS(Oj|򑚐 # K:K-0j5AX%kGtM?~YXMs#ue*QJJ S/@re)QQc [!=DޝPMD\?lI7=Gb˔Bˎ[\wyYX;^\u/Oإһ_.]9Fa/S6?݌گXK6<׫=r$vєxL[W) p}z2ݟu# y&eSMcNnvO-}GwP̻ݺnY6Usӻ1Lbc:Hn#"*nn]X 76Վ'X`Nߝ"X1QH$B]/V,ޜon_&"on߽1xJ\@h遨6QL;}~ؔ)g ~1n,~Z1W }|w( Z`de G`nxǔ!GǨvc}0{7 _KKgԊ[n9Z><VF_Jwi,I x p8Z5VlMyъhTrbN(4j8-D?ϷG}F]{sQ sADrFcϕl 1@ysu}#R"{pOz@,ANWN0(J 5H* kJ ()Xej$ #am Jj6R3jIRg_^jNcVSEl*eE^( ɴQ~mH0YdXRoDJhǡsY%VVAP6aSj#S i >)$ #zUS!IЅpͱ)Jq+wbĘN;Rۈ/Xvn70Gz.,䅛hM1:w+[.16)+=whwB^)å-Q /r$i(-2Jg@H!+]:65 wPR+"Vӂт<,cpLk%ͥ?#ރ<{UKٙP8C6+BQ m4WrR 4ĕU"*3Z pUY ])C8"S_ɜ{I&)(4f ЪA=Daw(w柭1bUKG%HITcJ@c~V(B$hp!/t|? wLrJ@n, \2\UK/[o+><\ h0४=KObtUQA)J5Y8~LП߲9FQv*/ P_;6WpWvÒd 938(f]QTaV*pQ* BUY $9"+5 C]#>lH]gJY;e$R֯'D$YRE!-wQhQtX,I+S[IQTy22#3PV^ڽ>PUEm&JkQ8 FPBhHʁ)YQ+~!=c f.hd\|ׁ j8Dq^H6YDPo"&wڗlyBCGp>+UBkKUR-~h_-W;|?SyTq$90˚C;]2W!\M)ז|2S$iŧH۰lԇK$iKgåH{ӃkqHR4Q)>"N=HcD\%Lt.t+8c$7, DAr1')c1;j!@13g@4y]s!ADlsL>}w}Bd-Ӝ fC)z~l} J@$[]+V '>lJ͙Sh -)M-K^~9Xe!ɲ y&eSbwS trߑF{y̻_ݺn96d;-&Y.QH"*#Ⱦ2  y&ڦ񇼮&bq >|nNMQ>ͯbƴ`Jq:B2~ؔ1{yqQ|^qJ!_KCɣڷ̒5{onyH\Z+8{rnة{U}aק/Wi֑C% TI+ŽNUiJ@:PI:UImJ(6Pc6ءZ=Ϋ##n" HPJ!PImk>]SBkYZȕR>}Wt?rۚP*cU€Sc￝zi`?3jOkqF"8Ռཁzmz Ńdvk\Tڀ}ɔRT^*cx-*3xN0x FjDjDjT]>M'Qjrcw!8-$9βy@c}>G71g̈\WJkev1#l#4zğYo~3sr7~Ѯr'ZD3a9T~3)h˙KmʗcŊC- @\s+-w*d+M9ﭧ  QY״4AGߵ$[3/qeX%q'-7ңiщ3.'\Vq9/_U G|MB<6wWHBM0ƍpt6_ BV; ![3u)ɑ%V?>߇m1*V]E -+ʮ䇾BxzeZzl|e~rq١lRꜢϺD끛'B ΅Hk?+ C;ғf('4MV"r:öbZޛv <9 w.z<ɒKɴRl2pdŒV$Ck2@u.{dIbLU_Ee zHlx}]w7"t[jXp*sV/-DyO0ۨ#Z m`ǼS9 ;E{kG$S,]j}trn|"UhrjUsy\5\UGdkshJbcic\6RTVw&agK]hhsx!:dSh^jt1VSJ>L~$]|[Zyi~}_~wrLདྷZ[\nujhD~Bˇ0~ O^r=}n4]TVYz6g"ol5sSSC8"c@2ku.ȋgڶ>yi~t;_(ff + Սç:|=Ee=:t|*vq4 Bcf}a#~5>_ZM#X*C2w%cօĂ`67HÔ  # 
SS*?ZqsuL$:Zn6c4\$51ցD76v\z~.nֹ i8asI톒z ~<>?]mJztUDxOWesi;>ےS3# Ǣ !6| O1wdwр^#堳[OW'p YbcczG$oO PEӘ4t 3?! 茅[LlmC\y>8l4g괎NKb`{>B*I5m?comB]yqBO[H xIegS &MY^[MISϦ,hlJMF5+v*8duz'w5ͳL"jC?NgD69l28Id3.~HduUwV'Hxڱ]{p!383 plF\ǯnR$V]aq~Sn~hs`k]~M}Ȏ.P:N<4^L'J6pR2 Ā Xk@$f%8X*kƵkR9` ّ@PuCBiOCϦ_;~]Wo+y\X;ވNkQi-Q3oQҌJpIj#E[lvl "5t|A ޠ.TjafP:q$: =>EIŘϿs9Ӈуleջ -ƒMf( քV`Ǹ-ڍ~(aޕ;v_]Cwl&MLX:3??E Q6&5Z1yu+^p&,JQ12LibElZᮒvbp,cIfK(I̍}\Z;G}Yo]cKגDl7<ݵ޿j6t$ѷoJJ@njn7`LNK|hT2. 6yn9$DJUx:%NK[Lo2M|m%J/ƤEZ&ҷuI ÒNOB(l;^vj y<.*Ӈ_&?F?bg̚ղџC;C81/?]~_.~?-k`=ZcX׳{=4Y\n' GHc5Z;"'(@j(LE 'UԼJp1G-Q-85M / 9Κ ύ S1qKMWY?||~u@Bk |-A 0WJW—Y t6_2r8J*"]X1*W[3u y|QdyQ E%%p5zH~hB c.PF{">&?mU2Lȅ]LB9MK2y}V߼ZEuAJRJfjŤxY{)i^uK+yZ}VlZċSbr=>KgdiJ^Ub;EAx)4/- .=y{)MTRm&I+~xbl}]4Ug*~غ%!ˍ%!FrZ5R\h;S}VYJ,HRYj얅ep js}lq޿$'GU t]JϸSڅicvaR]څ-j]Xl\~|Ik_f^Q T!5#DMߚIqjQŚ&3}:ͺҵyi ̔>0h]{ŚaZe<([hv#sk7̦BhZe^!^1o 1c\rY.(6!WmMbSY'ԭO*}ʙDwy"Ud\f*Ȇ+  Ɩ""]8b MXv2F&R kcSG㻌azSЁh_,F)zMRY4hW8{QqubK@\s-2>IJҏKIڱzeZ[Va."D!| [rfH:L2{=-WS$3@sR)N( "2k fN*M&)5D42x4E[v>WsN]ɖ Zg B8e5M^Ginӣq(DmףX zeGh̄RoH3"1D8W2M-E@ٸv*Ga"UE!08M.L_.?m$mrKB~o6QKKI^ 1Aа. "&P/$*Ba&jn~>.z5'aJHNfujL|-nhRȴט +km}9:NYE.QоyA'*B1An(w9e=1\ˊ~*yMѰK\Z``{=l ʈ hM\h/LFB?"P/gWYQ8h^*ce"hͭbPzTxl/l~10wk4yQ5Hu$_*1IXs b/iA(:2(Ռad AJch.,jcʜ9¨)¼Lּ L=V-{A\>h0J[̅Pa+LP2 9ۇ;6.05D` H:.D|BZRXӱy3YlL-=m+:'+9a<+$; 5B*B5 o$9ghhF+ r3sŻ%D(X -tحoOSxNf 6<#T= R=ͦש eHޏo}jR A*Y*NId.T UYӟ>ت Χxü4`1V[͜<;y sQ ?PڵP34!F 2D/5JS`q'.Pj5FR]  BrƅDkrxHkq 3w:dg Sw TƁrwPQpF5m[6{/fv]Y>blq%3.o*1!_ެ'F>?C;}w~V L|8taB&>A0a-VA}}vUg;^~ +gKkGB^fɔn:,vmN  PqO1nRև&ad|o :o.eR}G`BG1.?H ڐ.e cJdG߽NP'+mdq~mno*gN4QC:IRtxYLqakLٍ .7l)i]b\唰; 2[Y \ 4cf+=a./UQ6e%Mn_'90 I䆺-Qy?L1g]T_Ga Ș,]_?|{R`@օrԜXfuC5/M=;.(,#ڭŀqUiX TU >=Q`ؖ2ª'6ߍ115g? _=|;_^u;=VNm0+9;ɞJ@\23Oodwkbd}7J! X!S-?] 
Feb 21 06:47:03 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 21 06:47:03 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 21 06:47:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 
06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 21 06:47:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 21
06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c219,c404 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 
crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 21 06:47:04 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 21 06:47:05 crc kubenswrapper[4820]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.415999 4820 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423295 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423331 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423344 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423356 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423368 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423380 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423391 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423403 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423413 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423429 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423442 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423454 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423467 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423477 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423489 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423501 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423512 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423526 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423537 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423549 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423558 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423566 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423575 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423583 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423592 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423601 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423609 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423618 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423626 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423638 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423648 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423658 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423668 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423677 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423686 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423694 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423702 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423711 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423721 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423732 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423740 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423748 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423756 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423765 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423773 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423781 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423789 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423798 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423807 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423816 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423827 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423837 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423846 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423855 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423863 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423871 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423884 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423893 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423902 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423912 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423922 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423930 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423939 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423947 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423956 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423966 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423975 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423985 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.423993 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.424003 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.424012 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424164 4820 flags.go:64] FLAG: --address="0.0.0.0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424181 4820 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424200 4820 flags.go:64] FLAG: --anonymous-auth="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424213 4820 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424225 4820 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424265 4820 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424279 4820 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424291 4820 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424301 4820 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424311 4820 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424321 4820 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424331 4820 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424342 4820 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424352 4820 flags.go:64] FLAG: --cgroup-root=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424363 4820 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424373 4820 flags.go:64] FLAG: --client-ca-file=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424382 4820 flags.go:64] FLAG: --cloud-config=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424392 4820 flags.go:64] FLAG: --cloud-provider=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424401 4820 flags.go:64] FLAG: --cluster-dns="[]"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424414 4820 flags.go:64] FLAG: --cluster-domain=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424423 4820 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424434 4820 flags.go:64] FLAG: --config-dir=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424443 4820 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424453 4820 flags.go:64] FLAG: --container-log-max-files="5"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424466 4820 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424475 4820 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424485 4820 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424495 4820 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424505 4820 flags.go:64] FLAG: --contention-profiling="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424515 4820 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424525 4820 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424535 4820 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424546 4820 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424558 4820 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424567 4820 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424577 4820 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424587 4820 flags.go:64] FLAG: --enable-load-reader="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424596 4820 flags.go:64] FLAG: --enable-server="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424606 4820 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424617 4820 flags.go:64] FLAG: --event-burst="100"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424628 4820 flags.go:64] FLAG: --event-qps="50"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424637 4820 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424647 4820 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424656 4820 flags.go:64] FLAG: --eviction-hard=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424668 4820 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424678 4820 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424687 4820 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424696 4820 flags.go:64] FLAG: --eviction-soft=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424706 4820 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424715 4820 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424726 4820 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424735 4820 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424744 4820 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424754 4820 flags.go:64] FLAG: --fail-swap-on="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424763 4820 flags.go:64] FLAG: --feature-gates=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424776 4820 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424785 4820 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424795 4820 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424804 4820 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424814 4820 flags.go:64] FLAG: --healthz-port="10248"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424824 4820 flags.go:64] FLAG: --help="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424834 4820 flags.go:64] FLAG: --hostname-override=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424847 4820 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424857 4820 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424866 4820 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424876 4820 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424885 4820 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424896 4820 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424906 4820 flags.go:64] FLAG: --image-service-endpoint=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424916 4820 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424925 4820 flags.go:64] FLAG: --kube-api-burst="100"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424935 4820 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424945 4820 flags.go:64] FLAG: --kube-api-qps="50"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424954 4820 flags.go:64] FLAG: --kube-reserved=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424963 4820 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424972 4820 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424983 4820 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.424992 4820 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425001 4820 flags.go:64] FLAG: --lock-file=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425011 4820 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425020 4820 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425030 4820 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425044 4820 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425054 4820 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425063 4820 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425072 4820 flags.go:64] FLAG: --logging-format="text"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425082 4820 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425092 4820 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425102 4820 flags.go:64] FLAG: --manifest-url=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425111 4820 flags.go:64] FLAG: --manifest-url-header=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425123 4820 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425133 4820 flags.go:64] FLAG: --max-open-files="1000000"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425145 4820 flags.go:64] FLAG: --max-pods="110"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425154 4820 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425165 4820 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425176 4820 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425185 4820 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425196 4820 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425205 4820 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425215 4820 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425262 4820 flags.go:64] FLAG: --node-status-max-images="50"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425272 4820 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425283 4820 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425293 4820 flags.go:64] FLAG: --pod-cidr=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425304 4820 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425318 4820 flags.go:64] FLAG: --pod-manifest-path=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425327 4820 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425338 4820 flags.go:64] FLAG: --pods-per-core="0"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425348 4820 flags.go:64] FLAG: --port="10250"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425358 4820 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425368 4820 flags.go:64] FLAG: --provider-id=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425378 4820 flags.go:64] FLAG: --qos-reserved=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425387 4820 flags.go:64] FLAG: --read-only-port="10255"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425398 4820 flags.go:64] FLAG: --register-node="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425408 4820 flags.go:64] FLAG: --register-schedulable="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425418 4820 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425433 4820 flags.go:64] FLAG: --registry-burst="10"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425443 4820 flags.go:64] FLAG: --registry-qps="5"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425453 4820 flags.go:64] FLAG: --reserved-cpus=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425464 4820 flags.go:64] FLAG: --reserved-memory=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425476 4820 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425486 4820 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425496 4820 flags.go:64] FLAG: --rotate-certificates="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425506 4820 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425516 4820 flags.go:64] FLAG: --runonce="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425526 4820 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425536 4820 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425547 4820 flags.go:64] FLAG: --seccomp-default="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425557 4820 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425567 4820 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425577 4820 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425587 4820 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425597 4820 flags.go:64] FLAG: --storage-driver-password="root"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425607 4820 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425617 4820 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425627 4820 flags.go:64] FLAG: --storage-driver-user="root"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425636 4820 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425647 4820 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425656 4820 flags.go:64] FLAG: --system-cgroups=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425667 4820 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425683 4820 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425692 4820 flags.go:64] FLAG: --tls-cert-file=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425701 4820 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425714 4820 flags.go:64] FLAG: --tls-min-version=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425724 4820 flags.go:64] FLAG: --tls-private-key-file=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425733 4820 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425743 4820 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425752 4820 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425762 4820 flags.go:64] FLAG: --v="2"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425773 4820 flags.go:64] FLAG: --version="false"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425785 4820 flags.go:64] FLAG: --vmodule=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425796 4820 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.425806 4820 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426079 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426093 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426102 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426111 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426120 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426129 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426139 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426148 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426158 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426166 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426175 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426183 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426192 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426200 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426208 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426217 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426225 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426233 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426267 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426276 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426284 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 21 06:47:05 
crc kubenswrapper[4820]: W0221 06:47:05.426292 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426301 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426317 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426326 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426334 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426342 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426350 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426362 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426372 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426383 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426393 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426405 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426416 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426459 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc 
kubenswrapper[4820]: W0221 06:47:05.426472 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426487 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426500 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426513 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426526 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426536 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426544 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426553 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426562 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426570 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426578 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426587 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426595 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426603 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426612 
4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426620 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426628 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426637 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426646 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426665 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426680 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426689 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426700 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426710 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426734 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426745 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426753 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426765 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426775 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426786 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426795 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426805 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426814 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426822 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426831 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.426840 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.426864 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.443324 4820 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.443369 4820 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443507 4820 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443519 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443529 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443542 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443567 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443577 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443587 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443596 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443605 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443615 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443623 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443665 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443676 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443687 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443696 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443705 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443713 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443721 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443729 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443738 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443747 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443755 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443763 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443771 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443780 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443789 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443797 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443806 4820 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443814 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443822 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443831 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443842 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443853 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443862 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443873 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443883 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443891 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443901 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443910 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443920 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443930 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443941 4820 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443953 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443964 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443976 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.443991 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444005 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444019 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444031 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444042 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444051 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444059 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444068 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444076 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444085 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444093 4820 feature_gate.go:330] unrecognized 
feature gate: NutanixMultiSubnets Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444101 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444110 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444118 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444126 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444135 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444143 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444152 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444160 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444169 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444177 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444186 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444194 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444202 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444210 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 
06:47:05.444220 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.444263 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444555 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444574 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444588 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444601 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444612 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444622 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444634 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444649 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444661 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444673 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444686 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444697 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444707 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444718 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444728 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444738 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444749 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444761 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444772 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444784 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444794 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444804 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444814 
4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444824 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444835 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444846 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444856 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444866 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444876 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444889 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444902 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444914 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444925 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444936 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444949 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444960 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444972 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444982 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.444993 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445004 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445015 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445025 4820 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445035 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445046 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 
06:47:05.445056 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445068 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445082 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445095 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445105 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445117 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445129 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445140 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445152 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445163 4820 feature_gate.go:330] unrecognized feature gate: Example Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445174 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445329 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445344 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445354 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445365 4820 
feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445375 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445385 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445396 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445407 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445417 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445428 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445438 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445449 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445459 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445470 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445480 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.445494 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.445511 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false 
NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.446830 4820 server.go:940] "Client rotation is on, will bootstrap in background" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.459476 4820 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.459666 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.462024 4820 server.go:997] "Starting client certificate rotation" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.462080 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.464718 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 09:43:31.263871876 +0000 UTC Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.464884 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.495982 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.500877 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.504922 4820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.524037 4820 log.go:25] "Validated CRI v1 runtime API" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.567629 4820 log.go:25] "Validated CRI v1 image API" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.570285 4820 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.576965 4820 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-21-06-42-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.577017 4820 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.595981 4820 manager.go:217] Machine: {Timestamp:2026-02-21 06:47:05.593073984 +0000 UTC m=+0.626158222 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ec2c7a4f-4f2f-4567-9af1-65fc234d8f80 BootID:e79a2b5c-f808-4b7b-b373-103b6d673828 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:56:f4:28 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:56:f4:28 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:59:86:fd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e9:ea:1f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ee:9f:82 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2f:b3:1f Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b4:57:9b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:be:b3:2c:d2:fa:b6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:01:18:f8:bc:c2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.596382 4820 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.596528 4820 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.597747 4820 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.598676 4820 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.599165 4820 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.599866 4820 topology_manager.go:138] "Creating topology manager with none policy" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.599878 4820 container_manager_linux.go:303] "Creating device plugin manager" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.600470 4820 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.600509 4820 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.600728 4820 state_mem.go:36] "Initialized new in-memory state store" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.601141 4820 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606385 4820 kubelet.go:418] "Attempting to sync node with API server" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606413 4820 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606432 4820 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606449 4820 kubelet.go:324] "Adding apiserver pod source" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.606472 4820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.612950 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.612933 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.613146 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.613158 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.613428 4820 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.614891 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.616881 4820 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618645 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618692 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618709 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618724 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618745 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618759 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618773 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618794 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618811 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618826 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618844 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.618858 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.620657 4820 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.621336 4820 server.go:1280] "Started kubelet" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.622613 4820 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.622731 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.622857 4820 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 21 06:47:05 crc systemd[1]: Started Kubernetes Kubelet. Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.623457 4820 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.624946 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.624995 4820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625040 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:59:32.282021173 +0000 UTC Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625194 4820 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625226 4820 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.625362 4820 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.625363 4820 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.625977 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.626056 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626685 4820 server.go:460] "Adding debug handlers to kubelet server" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626697 4820 factory.go:153] Registering CRI-O factory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626821 4820 factory.go:221] Registration of the crio container factory successfully Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626914 4820 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626935 4820 factory.go:55] Registering systemd factory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626947 4820 factory.go:221] Registration of the systemd container factory successfully Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.626983 4820 factory.go:103] Registering Raw factory Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.627005 4820 manager.go:1196] Started watching for new ooms in manager Feb 21 
06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.627249 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.628078 4820 manager.go:319] Starting recovery of all containers Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.630612 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18963021e9321342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:47:05.621295938 +0000 UTC m=+0.654380166,LastTimestamp:2026-02-21 06:47:05.621295938 +0000 UTC m=+0.654380166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646191 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646398 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 
06:47:05.646411 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646422 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646431 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.646439 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648246 4820 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648312 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 21 
06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648323 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648336 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648351 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648365 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648377 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648386 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648398 4820 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648408 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648421 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648428 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648436 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648444 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648452 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648460 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648468 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648478 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648487 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648497 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648506 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648540 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648549 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648560 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648568 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648577 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648592 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648603 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648613 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648622 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648631 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648642 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648651 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648671 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648681 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648691 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648702 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648713 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648723 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648733 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648742 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648753 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648773 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648794 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648807 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648823 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648835 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648886 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648899 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648911 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648924 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648935 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648946 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648956 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.648967 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649007 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649018 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649029 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649040 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649050 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649060 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649071 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649080 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649091 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649100 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649110 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649131 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649141 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649151 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649162 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649172 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649181 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649190 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649200 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649209 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649219 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649228 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649253 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649264 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649290 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649299 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649309 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649319 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649330 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649344 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649366 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649424 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649436 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649449 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649464 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649477 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649489 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649498 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649511 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649522 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649531 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649542 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649551 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649562 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649577 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649589 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649599 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649610 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649620 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649630 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649643 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649653 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649664 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649674 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649685 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649694 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649722 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649732 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649742 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649754 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649764 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649799 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649808 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649818 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649828 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649838 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649848 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649860 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649870 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649880 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649891 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649907 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649918 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649955 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649966 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649976 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649985 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.649995 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650006 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650017 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650028 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650038 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650051 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650061 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650070 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650081 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650092 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650104 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650115
4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650126 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650136 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650145 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650154 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650166 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650177 4820 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650188 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650198 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650208 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650217 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650227 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650258 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650270 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650283 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650294 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650305 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650315 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650326 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650339 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650352 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650363 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650377 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650389 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650401 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650413 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650426 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650437 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650446 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650457 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650468 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" 
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650478 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650488 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650498 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650507 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650518 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650528 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650541 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650551 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650561 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650570 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650579 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650588 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650598 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650607 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650629 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650640 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650650 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650666 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650677 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650691 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650701 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650711 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650721 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650732 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650742 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650754 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650777 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650791 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650802 4820 reconstruct.go:97] "Volume reconstruction finished"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.650811 4820 reconciler.go:26] "Reconciler: start to sync state"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.659976 4820 manager.go:324] Recovery completed
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.680282 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683906 4820 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683938 4820 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.683968 4820 state_mem.go:36] "Initialized new in-memory state store"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.692309 4820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.695299 4820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.695368 4820 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.695423 4820 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.695509 4820 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 21 06:47:05 crc kubenswrapper[4820]: W0221 06:47:05.697823 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.697992 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.707223 4820 policy_none.go:49] "None policy: Start"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.708545 4820 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.708585 4820 state_mem.go:35] "Initializing new in-memory state store"
Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.725585 4820 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.763219 4820 manager.go:334] "Starting Device Plugin manager"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.763293 4820 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.763337 4820 server.go:79] "Starting device plugin registration server"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764037 4820 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764093 4820 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764328 4820 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764450 4820 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.764470 4820 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.772328 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.796591 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.796699 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798617 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798808 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.798888 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800414 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800535 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.800572 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801474 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801667 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.801745 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802469 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.802579 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.803008 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.805987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.805995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.806256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807510 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.807549 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.808689 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.808729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.808741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.828408 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852207 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852271 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852349 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852559 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852746 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852770 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.852862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.864292 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.865806 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:05 crc kubenswrapper[4820]: E0221 06:47:05.866337 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954118 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc 
kubenswrapper[4820]: I0221 06:47:05.954288 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954357 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954426 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954495 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc 
kubenswrapper[4820]: I0221 06:47:05.954474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954684 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954577 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954769 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:05 crc kubenswrapper[4820]: I0221 06:47:05.954823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.067288 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.069148 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.069924 4820 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.143906 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.159095 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.169374 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.199476 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.206646 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2 WatchSource:0}: Error finding container da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2: Status 404 returned error can't find the container with id da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2 Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.208374 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8 WatchSource:0}: Error finding container bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8: Status 404 returned error can't find the container with id bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8 Feb 21 06:47:06 crc 
kubenswrapper[4820]: I0221 06:47:06.210748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.217704 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239 WatchSource:0}: Error finding container a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239: Status 404 returned error can't find the container with id a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239 Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.225848 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e WatchSource:0}: Error finding container ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e: Status 404 returned error can't find the container with id ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.229831 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.238657 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837 WatchSource:0}: Error finding container 4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837: Status 
404 returned error can't find the container with id 4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837 Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.470300 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.471668 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.472342 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.624411 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.625443 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:35:08.309466497 +0000 UTC Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.701803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4356aaaf2366d8537375af947a6348d17f94fe09f711e85b04378475e77c4837"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.703524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ea6691f20617cdaf4a358b75a75dbfe19a707406daf14a975073611266346f7e"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.705404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a24610a1c1a1fc78062203199ef3c970b8fd8661ac92796d796dd46f3b6c9239"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.707258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bd0abe4b188836e3c1980548206556710f4ef5a7592e2a303077351a17fb8aa8"} Feb 21 06:47:06 crc kubenswrapper[4820]: I0221 06:47:06.708590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"da5c6f427571486481381c8437f6eb521f56a314e2aa5272089a4846562f23f2"} Feb 21 06:47:06 crc kubenswrapper[4820]: W0221 06:47:06.859000 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:06 crc kubenswrapper[4820]: E0221 06:47:06.859330 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: W0221 06:47:07.000451 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.000545 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.030568 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 21 06:47:07 crc kubenswrapper[4820]: W0221 06:47:07.056344 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.056427 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" 
logger="UnhandledError"
Feb 21 06:47:07 crc kubenswrapper[4820]: W0221 06:47:07.070669 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.070901 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.273218 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.275717 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.276427 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.521970 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 21 06:47:07 crc kubenswrapper[4820]: E0221 06:47:07.523301 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.624512 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.625663 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:59:23.26955843 +0000 UTC
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.712371 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0" exitCode=0
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.712452 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.712474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.713320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.713344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.713352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716831 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.716875 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.718482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.718531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.718547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.719567 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" exitCode=0
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.719732 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.720853 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.722868 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1d939ddef7c34f71808d30ff7720850717a4199e4ea4819f5499040b68c80903" exitCode=0
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.722971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1d939ddef7c34f71808d30ff7720850717a4199e4ea4819f5499040b68c80903"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.723027 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.725971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.726011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.726034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.726663 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.728966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.729023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.729046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.730296 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4" exitCode=0
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.730352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4"}
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.730448 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.732348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.732371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:07 crc kubenswrapper[4820]: I0221 06:47:07.732381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.196874 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.624223 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.626264 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:14:41.001274719 +0000 UTC
Feb 21 06:47:08 crc kubenswrapper[4820]: E0221 06:47:08.631756 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.733906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.733958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.733973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.734076 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.735102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.735131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.735143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738217 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738231 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738278 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.738360 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.739096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.739120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.739131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.740668 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aaf7562373015648060c40542c1d56ffebf82fbf72137a679b9bad32eca02126" exitCode=0
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.740717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aaf7562373015648060c40542c1d56ffebf82fbf72137a679b9bad32eca02126"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.740804 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.741503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.741528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.741538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.743736 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.743776 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.743716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e"}
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.749956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.750083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.750101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.877287 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:08 crc kubenswrapper[4820]: I0221 06:47:08.878520 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 06:47:08 crc kubenswrapper[4820]: E0221 06:47:08.879050 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.354506 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.626617 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:46:33.72634295 +0000 UTC
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749150 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="79304ceda3b0c42e04de9bcaaa0aebb6dc0b6c2e659f8a7aecae0478eaccb23e" exitCode=0
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749286 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749297 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749325 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749348 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749402 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749435 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749513 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.749756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"79304ceda3b0c42e04de9bcaaa0aebb6dc0b6c2e659f8a7aecae0478eaccb23e"}
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.751558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.752489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.752529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:09 crc kubenswrapper[4820]: I0221 06:47:09.752542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.626772 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:47:47.220441323 +0000 UTC
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.757628 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d66121d9b9a4a8e1e35a08932ab77167bea6664ba299d44ac1aa1b387d631e9b"}
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"544001f28a6a7bcbc04077600b5db500bfe354da92376c5d8fbeb514da8d163a"}
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e23741c116b20774f6b21bc77a91cd8506f8c10b81c704733c941ae0d8cec77"}
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"012a5c9a54a954a3807cb00fa356acfd255ce1ca4b456e6c6061caeb33c66d52"}
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:10 crc kubenswrapper[4820]: I0221 06:47:10.758710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:10.999986 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.000148 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.000188 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.001339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.001370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.001380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.627473 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:32:06.046233741 +0000 UTC
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.644898 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.702616 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"43d80be6c691e1caf02784f2a9617100c9d819907346bf869b267c7a6e0a5a23"}
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766575 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766663 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.766687 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768416 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:11 crc kubenswrapper[4820]: I0221 06:47:11.768462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.079214 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.080978 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.354526 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.354628 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.628277 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:49:18.39079604 +0000 UTC
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.768843 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.769683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.769712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:12 crc kubenswrapper[4820]: I0221 06:47:12.769721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:13 crc kubenswrapper[4820]: I0221 06:47:13.629355 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:14:22.546759408 +0000 UTC
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.189595 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.190398 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.191643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.191693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.191718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.441764 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.441926 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.443015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.443045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.443055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.489758 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.489908 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.491056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.491084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.491094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:14 crc kubenswrapper[4820]: I0221 06:47:14.630146 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:07:00.897194449 +0000 UTC
Feb 21 06:47:15 crc kubenswrapper[4820]: I0221 06:47:15.631006 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:24:42.600688496 +0000 UTC
Feb 21 06:47:15 crc kubenswrapper[4820]: E0221 06:47:15.772649 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.106747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.106966 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.108182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.108222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.108261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:16 crc kubenswrapper[4820]: I0221 06:47:16.631761 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:49:51.443426862 +0000 UTC
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.591431 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.591688 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.593511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.593570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.593588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.599740 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.632385 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:56:04.895825752 +0000 UTC
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.780487 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.781402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.781441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.781453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:17 crc kubenswrapper[4820]: I0221 06:47:17.787695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.632582 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:37:03.7575183 +0000 UTC
Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.783688 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.784866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.784917 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:47:18 crc kubenswrapper[4820]: I0221 06:47:18.784933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:47:19 crc kubenswrapper[4820]: W0221 06:47:19.482140 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.482315 4820 trace.go:236] Trace[232516160]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:09.480) (total time: 10001ms):
Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[232516160]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:19.482)
Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[232516160]: [10.001947768s] [10.001947768s] END
Feb 21 06:47:19 crc kubenswrapper[4820]: E0221 06:47:19.482355 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 21 06:47:19 crc kubenswrapper[4820]: W0221 06:47:19.566789 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.566906 4820 trace.go:236] Trace[1886156297]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:09.565) (total time: 10001ms): Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[1886156297]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:19.566) Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[1886156297]: [10.001257169s] [10.001257169s] END Feb 21 06:47:19 crc kubenswrapper[4820]: E0221 06:47:19.566943 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.625391 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.633616 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:47:03.642941269 +0000 UTC Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.788792 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.791138 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19" exitCode=255 Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.791193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19"} Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.791387 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.792369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.792442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.792469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.793550 4820 scope.go:117] "RemoveContainer" containerID="3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19" Feb 21 06:47:19 crc kubenswrapper[4820]: W0221 06:47:19.906768 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:19 crc kubenswrapper[4820]: I0221 06:47:19.906847 4820 trace.go:236] Trace[2009007637]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:09.904) (total time: 10002ms): Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[2009007637]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:19.906) Feb 21 06:47:19 crc kubenswrapper[4820]: Trace[2009007637]: [10.002032812s] [10.002032812s] END Feb 21 06:47:19 crc kubenswrapper[4820]: E0221 06:47:19.906871 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.054046 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:20 crc kubenswrapper[4820]: W0221 06:47:20.057474 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.057561 4820 trace.go:236] Trace[436150591]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Feb-2026 06:47:10.056) (total time: 10001ms): Feb 21 06:47:20 crc kubenswrapper[4820]: Trace[436150591]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:20.057) Feb 21 06:47:20 crc kubenswrapper[4820]: Trace[436150591]: [10.001331171s] [10.001331171s] END Feb 21 06:47:20 crc kubenswrapper[4820]: E0221 06:47:20.057582 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: 
TLS handshake timeout" logger="UnhandledError" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.402045 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.402225 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.403301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.403337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.403349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.448201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.634215 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:26:17.941324162 +0000 UTC Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.652730 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.652788 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.656329 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.656384 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.795060 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.796582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff"} Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.796643 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.796643 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797471 
4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.797683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:20 crc kubenswrapper[4820]: I0221 06:47:20.808315 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.005581 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]log ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]etcd ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 21 06:47:21 crc kubenswrapper[4820]: 
[+]poststarthook/generic-apiserver-start-informers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-filter ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-apiextensions-informers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-apiextensions-controllers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/crd-informer-synced ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-system-namespaces-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 21 06:47:21 crc kubenswrapper[4820]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/bootstrap-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/start-kube-aggregator-informers ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-registration-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-discovery-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]autoregister-completion ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-openapi-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 21 06:47:21 crc kubenswrapper[4820]: livez check failed Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.005648 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.635021 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:50:38.864133032 +0000 UTC Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.799399 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.799517 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.799589 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:21 crc kubenswrapper[4820]: I0221 06:47:21.800878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.356390 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.356566 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.635613 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:51:02.612235175 +0000 UTC Feb 21 06:47:22 crc 
kubenswrapper[4820]: I0221 06:47:22.802337 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.803513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.803538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:22 crc kubenswrapper[4820]: I0221 06:47:22.803546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:23 crc kubenswrapper[4820]: I0221 06:47:23.635748 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:50:27.343365319 +0000 UTC Feb 21 06:47:24 crc kubenswrapper[4820]: I0221 06:47:24.215284 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:24 crc kubenswrapper[4820]: I0221 06:47:24.636368 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:26:41.20591746 +0000 UTC Feb 21 06:47:25 crc kubenswrapper[4820]: E0221 06:47:25.632232 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.636537 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:11:35.487678871 +0000 UTC Feb 21 06:47:25 crc kubenswrapper[4820]: E0221 06:47:25.637464 4820 kubelet_node_status.go:99] "Unable to register node with API 
server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.637578 4820 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.653255 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.673932 4820 csr.go:261] certificate signing request csr-b45vm is approved, waiting to be issued Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.687565 4820 csr.go:257] certificate signing request csr-b45vm is issued Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.728512 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:25 crc kubenswrapper[4820]: E0221 06:47:25.772955 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 21 06:47:25 crc kubenswrapper[4820]: I0221 06:47:25.860457 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.004682 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.007763 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.418037 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.620384 4820 apiserver.go:52] "Watching apiserver" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.623727 4820 reflector.go:368] Caches 
populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.624625 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.625302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.625920 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.626041 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626290 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.626357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626473 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.626525 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626612 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.626763 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.627813 4820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.630952 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.631308 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632433 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632497 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632630 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632640 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.632951 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.633075 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.633291 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.636877 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:26:29.996853087 +0000 UTC
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643043 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643118 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643256 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643403 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643478 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643508 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643537 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643624 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643687 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643718 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643753 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643784 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643855 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643899 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644048 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644094 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644115 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644220 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644364 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644384 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644428 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644460 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644610 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644639 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644916 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645059 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645223 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645282 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645316 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645385 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645419 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645450 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645481 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645701 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645742 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643702 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.643917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644280 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644348 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644546 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644594 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644614 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644698 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.644951 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645294 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646100 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646177 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646205 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646256 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646448 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646476 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646511 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646610 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646625 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646641 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646664 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647068 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.647428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.647561 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.147537333 +0000 UTC m=+22.180621641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645972 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645762 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.645714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.648904 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649815 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.649973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.650180 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.650389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.646676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654824 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.654982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655056 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" 
(UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655195 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 
crc kubenswrapper[4820]: I0221 06:47:26.655265 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655298 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655429 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.655647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.656442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.656975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657085 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657169 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657204 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657213 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657340 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657457 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.657748 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.658867 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.659886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.659932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.659968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660001 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660069 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660133 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660556 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660626 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660699 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660778 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660845 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660910 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.660974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661092 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661281 
4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661316 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661452 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661551 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.661650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661714 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661811 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661842 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661911 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661946 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662015 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662048 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662084 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662219 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662275 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662310 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662344 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.662410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662480 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.663978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664018 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664052 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664119 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664451 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664991 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665023 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665216 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665274 4820 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665294 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665316 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665335 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665355 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665375 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.665393 4820 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665414 4820 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665434 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665454 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665473 4820 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665493 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665511 4820 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 
06:47:26.665532 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665551 4820 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665570 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665592 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665610 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665629 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665649 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665667 4820 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665687 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665705 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665724 4820 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665743 4820 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665764 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665784 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665804 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665824 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665842 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665860 4820 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665879 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665897 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665916 4820 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666043 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 
06:47:26.666064 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666302 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666326 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666346 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666365 4820 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666384 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666403 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666695 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666719 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666738 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666757 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666777 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666796 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666818 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666838 4820 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") 
on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666860 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666879 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666899 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666917 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666938 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666957 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666975 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671143 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.661469 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662733 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.662871 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.663088 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.663465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664420 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.664929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665135 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.665279 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.666859 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.667568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.668288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.668368 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669207 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669409 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669594 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669921 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669946 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670270 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.669113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670302 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670956 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.670110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671132 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671482 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.671522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672305 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672358 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672552 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672728 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.672999 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673154 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673127 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673196 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673597 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673738 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673780 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673914 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.673916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674704 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.674887 4820 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.675013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.675098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.675123 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.675331 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.175301671 +0000 UTC m=+22.208385909 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.675372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.675919 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.676046 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.176033431 +0000 UTC m=+22.209117859 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.676053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.676819 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.687364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.687503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.689834 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-21 06:42:25 +0000 UTC, rotation deadline is 2026-12-24 17:02:21.362591598 +0000 UTC Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.689859 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7354h14m54.672735576s for next certificate rotation Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690016 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690035 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690050 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.690050 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, /tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.690310 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.190271395 +0000 UTC m=+22.223355593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.690359 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.690648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.692533 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.693697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.693787 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694267 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694389 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694489 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.694687 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:27.194672046 +0000 UTC m=+22.227756244 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694408 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694406 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.694774 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695183 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695222 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695234 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.696040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.696090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695536 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695579 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.695643 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.697355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698888 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.698961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.699471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.700600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.701036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.701485 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.701975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.704176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.704200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.705446 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.706577 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.707342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.708083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.708346 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.708944 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709812 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.709919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.710193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.710721 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.710839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.711328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.711978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.712471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.712604 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.721468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.721701 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.721761 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.725684 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.741915 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.743049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.745480 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.746633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.746749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.753023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.761689 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768353 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768537 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768597 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768613 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768629 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768632 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768643 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768679 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768690 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768699 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768707 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768718 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768727 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768745 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768754 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768763 4820 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768771 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768780 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768788 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768797 4820 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768806 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768814 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768822 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768834 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768846 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768862 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768874 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768884 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768895 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768905 4820 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768914 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768922 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768930 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768938 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768946 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768955 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768964 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768972 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768980 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768988 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.768996 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769004 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769012 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769020 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" 
DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769028 4820 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769039 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769049 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769060 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769070 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769081 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769091 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769101 4820 reconciler_common.go:293] "Volume detached 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769113 4820 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769125 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769138 4820 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769149 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769162 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769174 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769187 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769198 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769210 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769222 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769255 4820 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769268 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769279 4820 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769292 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.769303 4820 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769313 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769324 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769336 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769348 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769362 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769374 4820 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769385 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769399 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769410 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769422 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769437 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769448 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769461 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769472 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769483 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769495 4820 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769506 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769516 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769527 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769538 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769549 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769559 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769570 4820 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769578 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769587 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769594 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769602 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769612 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" 
Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769634 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769645 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769656 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769666 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769676 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769686 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769699 4820 reconciler_common.go:293] 
"Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769710 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769721 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769732 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769745 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769758 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769769 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769780 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769793 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769805 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769816 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769828 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769840 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769854 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769869 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769881 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769892 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769903 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769912 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769920 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769928 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769937 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") 
on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769947 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769955 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769964 4820 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769973 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769981 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769990 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.769999 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc 
kubenswrapper[4820]: I0221 06:47:26.770008 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770024 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770032 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770040 4820 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770049 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.770978 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.816444 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.817147 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.822756 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" exitCode=255 Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.822798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff"} Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.822843 4820 scope.go:117] "RemoveContainer" containerID="3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.832848 4820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.833121 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:26 crc kubenswrapper[4820]: E0221 06:47:26.833428 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.838953 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, 
/tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.849181 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.857725 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.867781 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.878435 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.887719 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.895604 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.959606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 21 06:47:26 crc kubenswrapper[4820]: I0221 06:47:26.980326 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 21 06:47:26 crc kubenswrapper[4820]: W0221 06:47:26.992790 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507 WatchSource:0}: Error finding container 383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507: Status 404 returned error can't find the container with id 383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.022268 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.022535 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tv4k8"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.022905 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.025565 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.027266 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.029932 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.039490 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, /tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.048778 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.057918 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.061680 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69 WatchSource:0}: Error finding container 83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69: Status 404 returned error can't find the container with id 
83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.067066 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.072132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhl67\" (UniqueName: \"kubernetes.io/projected/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-kube-api-access-fhl67\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.072189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-hosts-file\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.074990 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.083483 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.091804 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.099091 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.172986 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.173106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-hosts-file\") pod \"node-resolver-tv4k8\" (UID: 
\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.173161 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.173138572 +0000 UTC m=+23.206222770 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.173229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-hosts-file\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.173272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhl67\" (UniqueName: \"kubernetes.io/projected/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-kube-api-access-fhl67\") pod \"node-resolver-tv4k8\" (UID: \"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.190362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhl67\" (UniqueName: \"kubernetes.io/projected/80b29fd0-922f-41c6-8ff4-dfa111ff89ad-kube-api-access-fhl67\") pod \"node-resolver-tv4k8\" (UID: 
\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\") " pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273862 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273900 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.273916 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.273994 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274034881 +0000 UTC m=+23.307119079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274058 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274094 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274112 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274127 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274139 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274150 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274198 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274182655 +0000 UTC m=+23.307266853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274099 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274272 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274210606 +0000 UTC m=+23.307294834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.274350 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:28.274333609 +0000 UTC m=+23.307417847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.346218 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tv4k8" Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.359467 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b29fd0_922f_41c6_8ff4_dfa111ff89ad.slice/crio-1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af WatchSource:0}: Error finding container 1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af: Status 404 returned error can't find the container with id 1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.463368 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xpb8z"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.464029 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qth8z"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.464210 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.464361 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.467792 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-94gxr"] Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.468084 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486225 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486310 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486225 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486640 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486651 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486728 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486757 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486768 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.486844 4820 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.489692 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.497153 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.508209 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.520544 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.532324 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.541200 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.549074 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.556219 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575140 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, /tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90
d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654bx\" (UniqueName: \"kubernetes.io/projected/086516d1-6ffd-4d1f-b222-898336aa9960-kube-api-access-654bx\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575545 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-conf-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-cnibin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-netns\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-daemon-config\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575610 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-system-cni-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575626 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-cnibin\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-etc-kubernetes\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-rootfs\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-system-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-socket-dir-parent\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bf7\" (UniqueName: \"kubernetes.io/projected/abdb469c-ba72-4790-9ce3-785f4facbcb9-kube-api-access-56bf7\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-proxy-tls\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575728 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-multus\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-os-release\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575807 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575867 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-os-release\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575883 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-multus-certs\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.575972 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-bin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576010 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-kubelet\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-mcd-auth-proxy-config\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576047 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcs9\" (UniqueName: \"kubernetes.io/projected/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-kube-api-access-hzcs9\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576066 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-k8s-cni-cncf-io\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-cni-binary-copy\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " 
pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.576125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-hostroot\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.584253 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.595649 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.605577 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.613075 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.621448 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.629304 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.635495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.637217 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:58:22.827471178 +0000 UTC Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.643158 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.650825 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.658929 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.667792 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676540 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676739 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-rootfs\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676781 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-system-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-socket-dir-parent\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-rootfs\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bf7\" (UniqueName: \"kubernetes.io/projected/abdb469c-ba72-4790-9ce3-785f4facbcb9-kube-api-access-56bf7\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676890 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-system-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-multus\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-proxy-tls\") 
pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-multus\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.676971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-os-release\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-socket-dir-parent\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-os-release\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-multus-certs\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677115 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-bin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-kubelet\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" 
Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-mcd-auth-proxy-config\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677224 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzcs9\" (UniqueName: \"kubernetes.io/projected/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-kube-api-access-hzcs9\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-cni-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-k8s-cni-cncf-io\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-cni-binary-copy\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: 
I0221 06:47:27.677355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-hostroot\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-os-release\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654bx\" (UniqueName: \"kubernetes.io/projected/086516d1-6ffd-4d1f-b222-898336aa9960-kube-api-access-654bx\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-conf-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-cnibin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-netns\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-daemon-config\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-cnibin\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-etc-kubernetes\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-system-cni-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-system-cni-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677651 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-hostroot\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677662 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-kubelet\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-os-release\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-multus-certs\") pod \"multus-94gxr\" (UID: 
\"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677834 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-var-lib-cni-bin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-conf-dir\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-cnibin\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.677929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-etc-kubernetes\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-cnibin\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678078 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-k8s-cni-cncf-io\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678111 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/abdb469c-ba72-4790-9ce3-785f4facbcb9-host-run-netns\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.678151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/086516d1-6ffd-4d1f-b222-898336aa9960-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.690458 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9d0e0f3d3f0e3b844b7680af76e328d504082820fc88fac20a347210641f19\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:19Z\\\",\\\"message\\\":\\\"W0221 06:47:08.762585 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0221 06:47:08.762958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771656428 cert, and key in /tmp/serving-cert-2158094551/serving-signer.crt, 
/tmp/serving-cert-2158094551/serving-signer.key\\\\nI0221 06:47:09.074439 1 observer_polling.go:159] Starting file observer\\\\nW0221 06:47:09.077196 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0221 06:47:09.077311 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:09.078843 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2158094551/tls.crt::/tmp/serving-cert-2158094551/tls.key\\\\\\\"\\\\nF0221 06:47:19.635416 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.698114 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.698260 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.700905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-proxy-tls\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.701041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-mcd-auth-proxy-config\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.701401 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-cni-binary-copy\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.701725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/abdb469c-ba72-4790-9ce3-785f4facbcb9-multus-daemon-config\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.702768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bf7\" (UniqueName: \"kubernetes.io/projected/abdb469c-ba72-4790-9ce3-785f4facbcb9-kube-api-access-56bf7\") pod \"multus-94gxr\" (UID: \"abdb469c-ba72-4790-9ce3-785f4facbcb9\") " pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.704167 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/086516d1-6ffd-4d1f-b222-898336aa9960-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.704424 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.705104 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.706180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzcs9\" (UniqueName: \"kubernetes.io/projected/ce38546e-524f-4801-8ee1-b4bb9d6c6dff-kube-api-access-hzcs9\") pod \"machine-config-daemon-qth8z\" (UID: \"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\") " pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.706411 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.706639 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654bx\" (UniqueName: \"kubernetes.io/projected/086516d1-6ffd-4d1f-b222-898336aa9960-kube-api-access-654bx\") pod \"multus-additional-cni-plugins-xpb8z\" (UID: \"086516d1-6ffd-4d1f-b222-898336aa9960\") " pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.707003 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.708005 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.708499 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.709040 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.709925 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.710516 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.711400 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.711954 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.713036 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.713551 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.714045 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.714929 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.715472 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.716385 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.716770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.717362 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.718280 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.718711 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.719817 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.720275 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.721328 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.721770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.722364 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.723438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.723926 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.724799 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.725307 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.726124 4820 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.726221 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.727803 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.728709 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.729097 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.730484 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.731094 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.732047 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.732715 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.733770 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.734317 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.735387 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.735975 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.736912 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.737732 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.738538 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.739393 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.740354 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.742085 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.742688 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.743202 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.744339 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.744990 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.745949 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.783104 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.793284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.807090 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce38546e_524f_4801_8ee1_b4bb9d6c6dff.slice/crio-f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3 WatchSource:0}: Error finding container f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3: Status 404 returned error can't find the container with id f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.812772 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-94gxr" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.826044 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.833005 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:27 crc kubenswrapper[4820]: E0221 06:47:27.833142 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.835112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"f6a9924100d73e4f6951786a3f88e371d993d13bb4b321ad414de4c0a6c13cc3"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.836026 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:47:27 crc kubenswrapper[4820]: W0221 06:47:27.836303 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabdb469c_ba72_4790_9ce3_785f4facbcb9.slice/crio-6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39 WatchSource:0}: Error finding container 6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39: Status 404 returned error can't find the container with id 6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39 Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.836727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.836984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tv4k8" event={"ID":"80b29fd0-922f-41c6-8ff4-dfa111ff89ad","Type":"ContainerStarted","Data":"0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.837026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tv4k8" event={"ID":"80b29fd0-922f-41c6-8ff4-dfa111ff89ad","Type":"ContainerStarted","Data":"1a09dfd8073022b97a60bf77860a5997b00d7e255e0a376af9d713480cbcd0af"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838425 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838572 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838859 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.838912 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.840280 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.840286 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 21 06:47:27 crc 
kubenswrapper[4820]: I0221 06:47:27.841139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"83938517d3f8a4ee7cfca71119836bdafde5e357cfa2c200e74042c9ba01fd69"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.843689 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.846747 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.846792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.846804 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"383a3383e4ac18d58791ca4b1c85320a91a1de8d9c4b57bafc3d7cd8f8fd3507"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.848183 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.848223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"72f690b7f8a99b99dc23e54a5cfdf8fe886c8872cc1a26dea16211d9cfdf1eb5"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.849147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerStarted","Data":"fe06c43f986174ea48f11afd7404f82f08745440e74b79dc022d8bf8b69f92a8"} Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.854801 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.865323 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.877192 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879264 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879381 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879453 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879468 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.879505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879527 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879545 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 
06:47:27.879613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879721 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc 
kubenswrapper[4820]: I0221 06:47:27.879755 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.879772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.888502 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.896458 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.905944 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.917590 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.928027 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.972700 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.980948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981133 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"ovnkube-node-bvfjp\" (UID: 
\"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982004 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982080 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982217 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982335 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.982385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod 
\"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.981059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.983331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.984062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.986901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.996446 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:27 crc kubenswrapper[4820]: I0221 06:47:27.997706 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"ovnkube-node-bvfjp\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.016836 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.027412 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.040382 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.052146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.062919 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.074516 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.084424 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.095230 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.107163 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.130562 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.161815 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.171975 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: W0221 06:47:28.172349 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda70ec449_ba11_47dd_a60c_f77993670045.slice/crio-118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c WatchSource:0}: Error finding container 118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c: Status 404 returned error can't find the container with id 118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.183070 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.183262 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.183222222 +0000 UTC m=+25.216306430 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284455 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.284472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284576 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284566 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284619 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.284607474 +0000 UTC m=+25.317691672 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284650 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.284630914 +0000 UTC m=+25.317715132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284700 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284736 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284743 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284752 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284763 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284766 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284802 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.28479281 +0000 UTC m=+25.317877018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.284837 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:30.2848141 +0000 UTC m=+25.317898338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.638218 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:46:54.049177276 +0000 UTC Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.695835 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.695924 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.695983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:28 crc kubenswrapper[4820]: E0221 06:47:28.696104 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.853762 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297" exitCode=0 Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.854049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.855996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.856033 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"6a01634402318ce17912e5065fb692381aafefea9a0f268b2bfb05a6d5931f39"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.857273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.857323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb"} Feb 21 
06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.860047 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" exitCode=0 Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.860081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.860102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c"} Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.873827 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:28 crc kubenswrapper[4820]: I0221 06:47:28.897059 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.105862 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:28Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.136441 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.166555 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.188474 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.200740 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.213190 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.234125 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.265211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.300901 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.318366 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.330809 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.341931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.355818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.358801 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.362860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.366115 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.368571 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.380985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.395504 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.412552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.427589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.439603 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.450781 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.467017 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.478032 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.488967 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.500901 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.510318 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.524465 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.538740 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.553678 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.567181 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.584003 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.597492 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.614018 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.630725 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.638925 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:04:24.29901921 +0000 UTC Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.646547 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.678684 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.696109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:29 crc kubenswrapper[4820]: E0221 06:47:29.696228 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865850 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865865 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.865895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" 
event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.867206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.869135 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f" exitCode=0 Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.869624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f"} Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.882969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.900566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.916677 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.929937 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.941393 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.964860 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:29 crc kubenswrapper[4820]: I0221 06:47:29.992023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:29Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.020623 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.032186 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.038407 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t5qxz"] Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.038820 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.053939 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.054431 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.054588 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.062513 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.082986 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.102273 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.118324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8767767-a460-416a-b2c2-82a8d9eebb1e-serviceca\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.118354 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8767767-a460-416a-b2c2-82a8d9eebb1e-host\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.118433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlct7\" (UniqueName: \"kubernetes.io/projected/c8767767-a460-416a-b2c2-82a8d9eebb1e-kube-api-access-nlct7\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.122461 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.151613 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/o
pt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.191367 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.219653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.219775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlct7\" (UniqueName: \"kubernetes.io/projected/c8767767-a460-416a-b2c2-82a8d9eebb1e-kube-api-access-nlct7\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.219848 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-21 06:47:34.219808346 +0000 UTC m=+29.252892584 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.220020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8767767-a460-416a-b2c2-82a8d9eebb1e-serviceca\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.220090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8767767-a460-416a-b2c2-82a8d9eebb1e-host\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.220201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8767767-a460-416a-b2c2-82a8d9eebb1e-host\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.221084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8767767-a460-416a-b2c2-82a8d9eebb1e-serviceca\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 
06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.241842 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.257224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlct7\" (UniqueName: \"kubernetes.io/projected/c8767767-a460-416a-b2c2-82a8d9eebb1e-kube-api-access-nlct7\") pod \"node-ca-t5qxz\" (UID: \"c8767767-a460-416a-b2c2-82a8d9eebb1e\") " pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.289608 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321057 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321097 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.321143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321210 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321234 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321267 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321275 4820 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321290 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321298 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321280141 +0000 UTC m=+29.354364359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321321 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321306331 +0000 UTC m=+29.354390529 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321341 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321331802 +0000 UTC m=+29.354416130 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321430 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321484 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321509 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.321602 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:34.321572649 +0000 UTC m=+29.354656897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.333086 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.350218 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t5qxz" Feb 21 06:47:30 crc kubenswrapper[4820]: W0221 06:47:30.364433 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8767767_a460_416a_b2c2_82a8d9eebb1e.slice/crio-65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5 WatchSource:0}: Error finding container 65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5: Status 404 returned error can't find the container with id 65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5 Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.375003 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.414391 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.449996 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.497402 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.544498 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.570383 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.611000 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.639719 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:39:25.193539505 +0000 UTC Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.654906 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.693893 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.696207 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.696280 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.696429 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:30 crc kubenswrapper[4820]: E0221 06:47:30.696509 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.731948 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.773449 4820 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.811224 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.850234 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.872901 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t5qxz" 
event={"ID":"c8767767-a460-416a-b2c2-82a8d9eebb1e","Type":"ContainerStarted","Data":"c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80"} Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.872950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t5qxz" event={"ID":"c8767767-a460-416a-b2c2-82a8d9eebb1e","Type":"ContainerStarted","Data":"65cac19bdb72b02141f171b0435a2e6dc14680c460a151e48e555d057a7bf1f5"} Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.875554 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b" exitCode=0 Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.875631 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b"} Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.890399 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.935917 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:30 crc kubenswrapper[4820]: I0221 06:47:30.971709 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:30Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.012330 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.050969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.091074 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.128582 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.170030 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.214756 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.249515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.291994 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.333685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.374818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.415277 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.454180 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9
1babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.492149 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.531648 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 
06:47:31.572802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 
06:47:31.614475 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.640592 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:15:13.405520813 +0000 UTC Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.669480 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.690869 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.696211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:31 crc kubenswrapper[4820]: E0221 06:47:31.696442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.737050 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.777231 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.812033 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.857732 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.880518 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88" exitCode=0 Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.880600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88"} Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.895405 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.932093 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:31 crc kubenswrapper[4820]: I0221 06:47:31.968913 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:31Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.010761 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.038011 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.040766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.040852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.040869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.041682 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.055138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.103870 4820 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.104130 4820 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.105133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.117859 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.121451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.128830 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.135928 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.139594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.155958 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.160514 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.175186 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.177192 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.181497 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.194625 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.195139 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.197470 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.213289 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.254558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.289719 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.298990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.299045 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.329626 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.370469 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 
06:47:32.402115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.402128 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.410068 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.457100 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.492327 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.504987 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.536364 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.571438 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.607654 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.641706 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:00:09.941735033 +0000 UTC Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.696347 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.696383 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.696471 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:32 crc kubenswrapper[4820]: E0221 06:47:32.702986 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.713269 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.816870 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.888895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.892972 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f" exitCode=0 Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.893020 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.905909 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.918267 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.920344 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:32Z","lastTransitionTime":"2026-02-21T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.929804 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z 
is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.942611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.953190 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.969282 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.980249 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:32 crc kubenswrapper[4820]: I0221 06:47:32.994705 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:32Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.009576 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.017907 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.024811 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.029847 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.053434 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.093539 4820 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.128508 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.132487 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.230670 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.333297 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.436807 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.539805 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.641937 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:06:10.735137349 +0000 UTC Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.643422 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.696343 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:33 crc kubenswrapper[4820]: E0221 06:47:33.696532 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.746819 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.849340 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.900388 4820 generic.go:334] "Generic (PLEG): container finished" podID="086516d1-6ffd-4d1f-b222-898336aa9960" containerID="c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef" exitCode=0 Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.900440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerDied","Data":"c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.925608 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.946624 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.956733 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:33Z","lastTransitionTime":"2026-02-21T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.960806 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.971805 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.982380 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:33 crc kubenswrapper[4820]: I0221 06:47:33.993631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:33Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.006444 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.017073 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.027885 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.038665 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.049926 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.058905 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.066873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.083924 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.102006 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c
4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161115 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.161137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.258943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.259122 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.259098377 +0000 UTC m=+37.292182575 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.263554 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360330 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.360369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360480 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360503 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360515 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360555 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360542732 +0000 UTC m=+37.393626920 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360480 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360588 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360634 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360685 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360695 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360673485 +0000 UTC m=+37.393757773 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360705 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360720 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360712206 +0000 UTC m=+37.393796524 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.360797 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.360770588 +0000 UTC m=+37.393854806 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.365976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.366056 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.467762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.570592 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.642672 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:28:29.382271658 +0000 UTC Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.672799 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.696056 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.696095 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.696150 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:34 crc kubenswrapper[4820]: E0221 06:47:34.696267 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.775457 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.878376 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.908573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" event={"ID":"086516d1-6ffd-4d1f-b222-898336aa9960","Type":"ContainerStarted","Data":"17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.913557 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.913912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.914108 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.925168 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.943165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.944127 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.950706 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.961407 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.972685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.980978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981038 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.981072 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:34Z","lastTransitionTime":"2026-02-21T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.984371 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:34 crc kubenswrapper[4820]: I0221 06:47:34.997319 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.008451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.020103 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.030358 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.040385 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.054960 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.068024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.083214 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.084249 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.096811 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z 
is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.106481 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.116534 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.127180 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.138536 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.149348 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.159033 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.168426 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.178516 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.186820 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.191492 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.202459 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.219302 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.231390 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.242543 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.258317 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.289451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.391926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.463300 4820 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.494137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.596411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.643470 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:22:42.855842123 +0000 UTC Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.696176 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:35 crc kubenswrapper[4820]: E0221 06:47:35.696336 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.698299 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.708633 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.720164 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.732617 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.743027 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.752142 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.762287 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.776815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.788818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.799969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.800609 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.811746 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.824345 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.842133 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.868999 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.883951 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.902411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:35Z","lastTransitionTime":"2026-02-21T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:35 crc kubenswrapper[4820]: I0221 06:47:35.915518 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005160 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.005357 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.107658 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.209831 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312751 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.312834 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.415171 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.517594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.620141 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.644278 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:42:45.785721922 +0000 UTC Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.695783 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:36 crc kubenswrapper[4820]: E0221 06:47:36.695898 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.695789 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:36 crc kubenswrapper[4820]: E0221 06:47:36.695958 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.722384 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825108 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.825192 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.919823 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/0.log" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.922917 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd" exitCode=1 Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.922948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.924072 4820 scope.go:117] "RemoveContainer" containerID="01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927191 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.927200 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:36Z","lastTransitionTime":"2026-02-21T06:47:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.951870 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.964346 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.973957 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.983411 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:36 crc kubenswrapper[4820]: I0221 06:47:36.996035 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:36Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.008561 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.020611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.030641 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.032102 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.047928 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.065376 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.085184 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.100497 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.120736 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06
:47:36Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 06:47:36.845043 6158 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 06:47:36.845162 6158 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 06:47:36.845174 6158 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 06:47:36.845198 6158 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 06:47:36.845216 6158 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 06:47:36.845256 6158 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 06:47:36.845264 6158 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 06:47:36.845283 6158 factory.go:656] Stopping watch factory\\\\nI0221 06:47:36.845295 6158 ovnkube.go:599] Stopped ovnkube\\\\nI0221 06:47:36.845314 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 06:47:36.845323 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 06:47:36.845328 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 06:47:36.845333 6158 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 06:47:36.845339 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0221 06:47:36.845343 6158 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 
06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.131815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc 
kubenswrapper[4820]: I0221 06:47:37.133432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.133443 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.236202 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.338342 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.440375 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.543334 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.644415 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:40:33.554525045 +0000 UTC Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.645721 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.696348 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:37 crc kubenswrapper[4820]: E0221 06:47:37.696658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.748068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.850325 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.931932 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.932665 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/0.log" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.935691 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" exitCode=1 Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.935789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.935837 4820 scope.go:117] "RemoveContainer" containerID="01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.936940 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:37 crc kubenswrapper[4820]: E0221 06:47:37.937178 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952468 4820 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.952540 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:37Z","lastTransitionTime":"2026-02-21T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.957616 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:37 crc kubenswrapper[4820]: I0221 06:47:37.986631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01093065f6dc3436137926b0b32d5970ac02a7ee7e569a8e908e7a96e3d2e5fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:36Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0221 06:47:36.845043 6158 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0221 06:47:36.845162 6158 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0221 06:47:36.845174 6158 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0221 06:47:36.845198 6158 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0221 06:47:36.845216 6158 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0221 06:47:36.845256 6158 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0221 06:47:36.845264 6158 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0221 06:47:36.845283 6158 factory.go:656] Stopping watch factory\\\\nI0221 06:47:36.845295 6158 ovnkube.go:599] Stopped ovnkube\\\\nI0221 06:47:36.845314 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0221 06:47:36.845323 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0221 06:47:36.845328 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0221 06:47:36.845333 6158 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0221 06:47:36.845339 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0221 06:47:36.845343 6158 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0221 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114
c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:37.999977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.012156 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.023007 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.038563 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.055137 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.055990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.056120 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.070581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.084030 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.096870 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.113290 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.134378 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.148148 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.158960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.159098 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.166807 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.262386 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.366584 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.469941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.470037 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.572891 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.644819 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:46:41.898892268 +0000 UTC Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.676398 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.696099 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.696109 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:38 crc kubenswrapper[4820]: E0221 06:47:38.696384 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:38 crc kubenswrapper[4820]: E0221 06:47:38.696546 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.778357 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.880419 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.944545 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.950335 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:38 crc kubenswrapper[4820]: E0221 06:47:38.950595 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.965397 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.979065 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.983412 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:38Z","lastTransitionTime":"2026-02-21T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:38 crc kubenswrapper[4820]: I0221 06:47:38.998068 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.014407 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.031883 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.051539 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.069226 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.086693 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.090569 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z 
is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.109223 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.126726 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.141384 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.154350 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.181143 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.188966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.189049 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.193869 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-21T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.291515 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.393885 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.496108 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.598693 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.645595 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:10:34.64466607 +0000 UTC Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.696134 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:39 crc kubenswrapper[4820]: E0221 06:47:39.696391 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.700465 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.802696 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904466 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:39 crc kubenswrapper[4820]: I0221 06:47:39.904500 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:39Z","lastTransitionTime":"2026-02-21T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.006971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.109676 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.212550 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.315773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.397809 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7"] Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.398293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.401321 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.401722 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.417722 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.419350 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.421787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8646\" (UniqueName: \"kubernetes.io/projected/d837134d-9746-4fda-af7c-acf3077a61c7-kube-api-access-b8646\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.421955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.422071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.422277 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d837134d-9746-4fda-af7c-acf3077a61c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.432602 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.443348 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.454209 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.467312 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.487875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.506379 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.519932 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.520856 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523065 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d837134d-9746-4fda-af7c-acf3077a61c7-ovn-control-plane-metrics-cert\") 
pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.523167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8646\" (UniqueName: \"kubernetes.io/projected/d837134d-9746-4fda-af7c-acf3077a61c7-kube-api-access-b8646\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.524033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.524142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d837134d-9746-4fda-af7c-acf3077a61c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.537052 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.541002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8646\" (UniqueName: \"kubernetes.io/projected/d837134d-9746-4fda-af7c-acf3077a61c7-kube-api-access-b8646\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.552184 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.570533 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.574316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d837134d-9746-4fda-af7c-acf3077a61c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hbb7\" (UID: \"d837134d-9746-4fda-af7c-acf3077a61c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.583066 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.594835 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.610772 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622774 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:40Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622875 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc 
kubenswrapper[4820]: I0221 06:47:40.622890 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.622901 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.646336 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:37:30.057418656 +0000 UTC Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.696637 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.696661 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:40 crc kubenswrapper[4820]: E0221 06:47:40.696765 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:40 crc kubenswrapper[4820]: E0221 06:47:40.696830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.716654 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725398 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.725522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: W0221 06:47:40.730062 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd837134d_9746_4fda_af7c_acf3077a61c7.slice/crio-71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54 WatchSource:0}: Error finding container 71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54: Status 404 returned error can't find the container with id 71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54 Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.827535 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.929885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.930586 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:40Z","lastTransitionTime":"2026-02-21T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.958833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" event={"ID":"d837134d-9746-4fda-af7c-acf3077a61c7","Type":"ContainerStarted","Data":"0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1"} Feb 21 06:47:40 crc kubenswrapper[4820]: I0221 06:47:40.958875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" event={"ID":"d837134d-9746-4fda-af7c-acf3077a61c7","Type":"ContainerStarted","Data":"71fa951932e261142376c1fde3bed0730777151885ec85190f24629f64ad5d54"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.033195 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.135494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.238770 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.341748 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.444175 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.511882 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bt6wj"] Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.512678 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.512799 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.532479 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546950 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546936 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.546985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.547136 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.564552 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.580328 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.594164 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.618384 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.632263 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.632831 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.632923 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tf29\" (UniqueName: \"kubernetes.io/projected/a4537dd3-6e3b-481a-9f90-668020b5558b-kube-api-access-6tf29\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.647356 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:35:33.082698991 +0000 UTC Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.647576 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc 
kubenswrapper[4820]: I0221 06:47:41.649203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.649302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.662965 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.673275 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.687049 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.695745 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.695866 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.703774 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.716809 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.731659 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.733784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.733822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tf29\" (UniqueName: \"kubernetes.io/projected/a4537dd3-6e3b-481a-9f90-668020b5558b-kube-api-access-6tf29\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.734082 4820 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:41 crc kubenswrapper[4820]: E0221 06:47:41.734126 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:42.234114317 +0000 UTC m=+37.267198515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.747225 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0
f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751125 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751206 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.751911 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tf29\" (UniqueName: \"kubernetes.io/projected/a4537dd3-6e3b-481a-9f90-668020b5558b-kube-api-access-6tf29\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.759965 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add
5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc 
kubenswrapper[4820]: I0221 06:47:41.853717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.853731 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.955673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:41Z","lastTransitionTime":"2026-02-21T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.963297 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" event={"ID":"d837134d-9746-4fda-af7c-acf3077a61c7","Type":"ContainerStarted","Data":"82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02"} Feb 21 06:47:41 crc kubenswrapper[4820]: I0221 06:47:41.979772 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.000455 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:41Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.012320 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.024316 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc 
kubenswrapper[4820]: I0221 06:47:42.034508 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.046475 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.059348 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.063005 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.077141 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.092299 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.103631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.114108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.123696 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.145011 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 
06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.158435 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.161180 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.173038 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.184824 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.238475 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.238667 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.238869 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:43.238761287 +0000 UTC m=+38.271845515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.264133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.294597 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.308585 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.313686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.333314 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.337887 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.339125 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.339387 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.339359307 +0000 UTC m=+53.372443545 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.355025 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.359969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.360074 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.377126 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.381729 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.400101 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.400360 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.402416 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440467 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.440641 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440733 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440745 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440772 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440792 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440808 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440819 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.440798972 +0000 UTC m=+53.473883200 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440905 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.440885144 +0000 UTC m=+53.473969382 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440927 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.440914965 +0000 UTC m=+53.473999203 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.440975 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.441033 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.441061 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.441156 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:47:58.44113069 +0000 UTC m=+53.474214918 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.505598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.608598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.647824 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:29:10.070003584 +0000 UTC Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.696135 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.696168 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.696209 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.696471 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.696558 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:42 crc kubenswrapper[4820]: E0221 06:47:42.696658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.697225 4820 scope.go:117] "RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.710729 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.813652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.915968 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:42Z","lastTransitionTime":"2026-02-21T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.967804 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.970075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52"} Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.970621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:42 crc kubenswrapper[4820]: I0221 06:47:42.985736 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:42Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.004175 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.016540 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018787 4820 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.018828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.034513 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 
21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.052566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.069870 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.120934 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.121199 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.141906 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.160305 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.176198 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.188581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.209631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.224982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225044 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.225873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.239429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.250186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:43 crc kubenswrapper[4820]: E0221 06:47:43.250353 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:43 crc kubenswrapper[4820]: E0221 06:47:43.250433 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:45.25040735 +0000 UTC m=+40.283491568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.258019 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.277387 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:43Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc 
kubenswrapper[4820]: I0221 06:47:43.328368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.328765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.430859 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.533626 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.636165 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.648183 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:39:25.127462827 +0000 UTC Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.695754 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:43 crc kubenswrapper[4820]: E0221 06:47:43.695896 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.738824 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.841652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:43 crc kubenswrapper[4820]: I0221 06:47:43.944576 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:43Z","lastTransitionTime":"2026-02-21T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.047429 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150451 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.150522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.253127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.355888 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.458934 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.561990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.562071 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.598418 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.599501 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.599747 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.648759 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:47:25.735301785 +0000 UTC Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.668905 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.695798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.695855 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.695797 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.695946 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.696054 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:44 crc kubenswrapper[4820]: E0221 06:47:44.696122 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.771999 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.874880 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:44 crc kubenswrapper[4820]: I0221 06:47:44.977539 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:44Z","lastTransitionTime":"2026-02-21T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.079956 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.182697 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.272789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:45 crc kubenswrapper[4820]: E0221 06:47:45.272961 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:45 crc kubenswrapper[4820]: E0221 06:47:45.273023 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:49.273005757 +0000 UTC m=+44.306089965 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.284851 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.387891 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.490303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.593374 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.649186 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:58:21.018246772 +0000 UTC Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.695850 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.695972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.696090 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: E0221 06:47:45.696107 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.713298 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.731994 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.749849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.766060 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.783491 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.798551 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.806125 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08
4652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.821349 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.838822 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.854172 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.874375 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.894515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.900730 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:45Z","lastTransitionTime":"2026-02-21T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.914387 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.926212 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.936911 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc 
kubenswrapper[4820]: I0221 06:47:45.950146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:45 crc kubenswrapper[4820]: I0221 06:47:45.980459 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.003433 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.106603 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.209891 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314895 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314916 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.314966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.418594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.521982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.522000 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.624668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.650328 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:00:15.967530985 +0000 UTC Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.696548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.696558 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.696584 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:46 crc kubenswrapper[4820]: E0221 06:47:46.696925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:46 crc kubenswrapper[4820]: E0221 06:47:46.696899 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:46 crc kubenswrapper[4820]: E0221 06:47:46.697013 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.728992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.729003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.831905 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832067 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.832289 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:46 crc kubenswrapper[4820]: I0221 06:47:46.936491 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:46Z","lastTransitionTime":"2026-02-21T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.038974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.039115 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.141900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.141965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.141982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.142006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.142023 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.246292 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.349539 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.451764 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.554262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.650615 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:30:34.233894909 +0000 UTC Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.656935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.656990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.657007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.657029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.657041 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.696652 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:47 crc kubenswrapper[4820]: E0221 06:47:47.696930 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.759173 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.861555 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:47 crc kubenswrapper[4820]: I0221 06:47:47.966981 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:47Z","lastTransitionTime":"2026-02-21T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.069351 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.171923 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.274668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377829 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.377892 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.480842 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.583966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.584087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.651132 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:19:02.234532979 +0000 UTC Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.686947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.686995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.687007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.687027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.687041 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.696475 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.696508 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:48 crc kubenswrapper[4820]: E0221 06:47:48.696624 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.696712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:48 crc kubenswrapper[4820]: E0221 06:47:48.696916 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:48 crc kubenswrapper[4820]: E0221 06:47:48.697088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.790431 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.894193 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.995949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.995996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.996008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.996025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:48 crc kubenswrapper[4820]: I0221 06:47:48.996037 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:48Z","lastTransitionTime":"2026-02-21T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.099946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.100275 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203093 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203105 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.203141 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.306338 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.317157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:49 crc kubenswrapper[4820]: E0221 06:47:49.317461 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:49 crc kubenswrapper[4820]: E0221 06:47:49.317654 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:47:57.317620126 +0000 UTC m=+52.350704364 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.409950 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.512682 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615871 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.615909 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.651822 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:23:46.64606798 +0000 UTC Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.696743 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:49 crc kubenswrapper[4820]: E0221 06:47:49.696921 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.718315 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.820945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.820987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.820998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.821015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.821029 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.923970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:49 crc kubenswrapper[4820]: I0221 06:47:49.924075 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:49Z","lastTransitionTime":"2026-02-21T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.029396 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.135979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.136002 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.239607 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.342940 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.446537 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.549693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.550535 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.652877 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:32:22.721195384 +0000 UTC Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.654794 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.696084 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.696084 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:50 crc kubenswrapper[4820]: E0221 06:47:50.696567 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.696138 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:50 crc kubenswrapper[4820]: E0221 06:47:50.697217 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:50 crc kubenswrapper[4820]: E0221 06:47:50.697562 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757465 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.757512 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.861473 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965627 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:50 crc kubenswrapper[4820]: I0221 06:47:50.965708 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:50Z","lastTransitionTime":"2026-02-21T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067825 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.067842 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.170900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.170982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.171007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.171033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.171050 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.273896 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.273965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.273982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.274006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.274023 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.377459 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.479966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480053 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.480065 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.582991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.583009 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.653145 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:54:16.354922521 +0000 UTC Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.685762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.696327 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:51 crc kubenswrapper[4820]: E0221 06:47:51.696507 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.788583 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.892478 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995829 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:51 crc kubenswrapper[4820]: I0221 06:47:51.995893 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:51Z","lastTransitionTime":"2026-02-21T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.099413 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.203163 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306411 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.306422 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.408810 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.512719 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.616967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.617002 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.653554 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:19:17.049017446 +0000 UTC Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.695962 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.696133 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.695961 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.696297 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.696489 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.696633 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.720127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.729901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.730185 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.745313 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.749992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.750013 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.765221 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.769910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.770019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.770114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.770203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.770326 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.789766 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.794460 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.811719 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.814994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.815088 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.828932 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:52Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:52 crc kubenswrapper[4820]: E0221 06:47:52.829081 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.830831 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:52 crc kubenswrapper[4820]: I0221 06:47:52.933696 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:52Z","lastTransitionTime":"2026-02-21T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.037303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140895 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140907 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.140939 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.244713 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347888 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.347911 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.451533 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.554931 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.654757 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:40:04.181766418 +0000 UTC Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.657608 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.696193 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:53 crc kubenswrapper[4820]: E0221 06:47:53.696417 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.761384 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.864894 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:53 crc kubenswrapper[4820]: I0221 06:47:53.967932 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:53Z","lastTransitionTime":"2026-02-21T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.071936 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.175158 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.278973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.279069 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.382700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.484888 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.494607 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.507663 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.527475 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.544873 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.561167 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.574486 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.587035 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.588580 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.601130 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08
4652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.611301 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.622876 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.638100 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.650874 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.656523 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:24:09.176452722 +0000 UTC Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.664417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T0
6:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.682654 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.690540 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.696021 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.696067 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.696029 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:54 crc kubenswrapper[4820]: E0221 06:47:54.696177 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:54 crc kubenswrapper[4820]: E0221 06:47:54.696267 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:54 crc kubenswrapper[4820]: E0221 06:47:54.696319 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.697767 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42e
a83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.707863 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc 
kubenswrapper[4820]: I0221 06:47:54.718882 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:54Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792871 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792888 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.792923 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.895366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.997678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.997878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.997951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.998023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:54 crc kubenswrapper[4820]: I0221 06:47:54.998105 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:54Z","lastTransitionTime":"2026-02-21T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.100635 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.202959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.203039 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.304939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.304983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.304994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.305009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.305020 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407465 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.407495 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.509462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.509731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.509886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.510064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.510265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.612649 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.657785 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:12:33.137277364 +0000 UTC Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.695793 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:55 crc kubenswrapper[4820]: E0221 06:47:55.695913 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.711409 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.715488 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.726508 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d7
42fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.740966 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.754611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.771255 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.782977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.793286 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc 
kubenswrapper[4820]: I0221 06:47:55.803536 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.813594 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc 
kubenswrapper[4820]: I0221 06:47:55.818070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.818078 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.824226 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.834392 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.846481 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.855861 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.865655 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.876971 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.888989 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.919976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:55 crc kubenswrapper[4820]: I0221 06:47:55.920119 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:55Z","lastTransitionTime":"2026-02-21T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.022595 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.110639 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.117084 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.120757 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node
-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.123998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.124052 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.129506 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.141398 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.152658 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.163635 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.176439 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.190201 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.201694 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.212765 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.224402 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.226262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.238599 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.252292 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.270293 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.283568 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.293886 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc 
kubenswrapper[4820]: I0221 06:47:56.304077 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:56Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.329780 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432214 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432264 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.432316 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.534998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.535008 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.637633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.658842 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:50:26.819825434 +0000 UTC Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696042 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696044 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696089 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:56 crc kubenswrapper[4820]: E0221 06:47:56.696433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:56 crc kubenswrapper[4820]: E0221 06:47:56.696617 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.696699 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:56 crc kubenswrapper[4820]: E0221 06:47:56.696725 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.742097 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844319 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.844668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:56 crc kubenswrapper[4820]: I0221 06:47:56.946544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:56Z","lastTransitionTime":"2026-02-21T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.017221 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.019549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.020040 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.034765 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc 
kubenswrapper[4820]: I0221 06:47:57.049312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.049349 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.050447 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.068853 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.083968 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.099369 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.119531 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.129161 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.137916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc 
kubenswrapper[4820]: I0221 06:47:57.148528 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.150982 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.160260 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.172938 4820 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.185663 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.198487 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.212543 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.227305 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.238646 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.250343 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.252742 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354911 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.354921 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.403597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:57 crc kubenswrapper[4820]: E0221 06:47:57.403745 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:57 crc kubenswrapper[4820]: E0221 06:47:57.403811 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:48:13.403794149 +0000 UTC m=+68.436878347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.456775 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.558868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.559540 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.659378 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:42:05.786308811 +0000 UTC Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.661779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.696563 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:57 crc kubenswrapper[4820]: E0221 06:47:57.696887 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.764468 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.866765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:57 crc kubenswrapper[4820]: I0221 06:47:57.969269 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:57Z","lastTransitionTime":"2026-02-21T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.025094 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.025836 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/1.log" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.028834 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" exitCode=1 Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.028876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.028921 4820 scope.go:117] "RemoveContainer" containerID="c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.030123 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.030442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.048432 4820 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3
dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.059471 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.072296 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.075257 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.089864 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.100462 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.112742 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.133773 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86d4d64351b3ba58d7444015079b454f7815fe7df26426278890878d94d20a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:37Z\\\",\\\"message\\\":\\\" failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:37Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:37.779490 6277 services_controller.go:356] Processing sync for service openshift-oauth-apiserver/api for network=default\\\\nI0221 06:47:37.779541 6277 services_controller.go:434] Service openshift-console/console retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{console openshift-console da99054c-338b-4216-8e73-72be1a1258a4 12350 0 2025-02-23 05:39:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: cons\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a
1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.145092 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.155174 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc 
kubenswrapper[4820]: I0221 06:47:58.167040 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.174803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.182875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.201060 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.212924 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.226701 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.237470 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.248653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.259023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.276949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.276982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.276992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.277008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.277016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.379369 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.414926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.415072 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.415045061 +0000 UTC m=+85.448129259 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.481349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: 
I0221 06:47:58.481361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516420 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 
06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516477 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.516460944 +0000 UTC m=+85.549545162 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.516419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516552 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516574 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516611 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516631 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.516605338 +0000 UTC m=+85.549689586 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516632 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516701 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.51668018 +0000 UTC m=+85.549764418 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516785 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516829 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516850 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.516949 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:48:30.516917137 +0000 UTC m=+85.550001375 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.584258 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.660814 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:47:53.429659726 +0000 UTC Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.687511 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.695707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.695768 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.695863 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.695710 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.696042 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:47:58 crc kubenswrapper[4820]: E0221 06:47:58.696187 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.790383 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.892979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.893059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:58 crc kubenswrapper[4820]: I0221 06:47:58.995314 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:58Z","lastTransitionTime":"2026-02-21T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.033967 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.037735 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:47:59 crc kubenswrapper[4820]: E0221 06:47:59.037935 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.053117 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.065428 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.077146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.086687 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.095744 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.097192 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.112473 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08
4652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.123109 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.137290 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.148252 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.160800 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.174659 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.190260 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.200412 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.204295 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.215184 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.224933 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc 
kubenswrapper[4820]: I0221 06:47:59.237361 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.253936 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:59Z is after 2025-08-24T17:21:41Z" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.303675 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.406912 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.509747 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.612491 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.661465 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:00:06.831484886 +0000 UTC Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.695894 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:47:59 crc kubenswrapper[4820]: E0221 06:47:59.696044 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714906 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.714916 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.817691 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:47:59 crc kubenswrapper[4820]: I0221 06:47:59.920652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:47:59Z","lastTransitionTime":"2026-02-21T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.023598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.125389 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.227686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.329956 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.433294 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535586 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.535715 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.638161 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.662291 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:33:58.855021547 +0000 UTC Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.695657 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.695695 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:00 crc kubenswrapper[4820]: E0221 06:48:00.695812 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.695885 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:00 crc kubenswrapper[4820]: E0221 06:48:00.696023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:00 crc kubenswrapper[4820]: E0221 06:48:00.696138 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.740849 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843107 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.843148 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946204 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:00 crc kubenswrapper[4820]: I0221 06:48:00.946221 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:00Z","lastTransitionTime":"2026-02-21T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048871 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.048966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.151299 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.253673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355863 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.355893 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.458946 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.561481 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.662440 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:42:49.145370183 +0000 UTC Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.664524 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.696426 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:01 crc kubenswrapper[4820]: E0221 06:48:01.696584 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.766869 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.870132 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.971964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:01 crc kubenswrapper[4820]: I0221 06:48:01.972030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:01Z","lastTransitionTime":"2026-02-21T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.074971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177444 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177463 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.177506 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.281387 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.385331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487316 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.487446 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.590321 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.663560 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:32:20.986726129 +0000 UTC Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.693095 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.696509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:02 crc kubenswrapper[4820]: E0221 06:48:02.696632 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.696514 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.696739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:02 crc kubenswrapper[4820]: E0221 06:48:02.697006 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:02 crc kubenswrapper[4820]: E0221 06:48:02.697100 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.795997 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.796066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.898953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:02 crc kubenswrapper[4820]: I0221 06:48:02.899034 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:02Z","lastTransitionTime":"2026-02-21T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.001569 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.104887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.105535 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141883 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.141938 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.184166 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.194761 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.216283 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.221995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.222041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.222054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.222073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.222087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.240719 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245750 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.245779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.263455 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.269178 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.291753 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:03Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.291986 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.294157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.396579 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.499938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.500071 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.603361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.664130 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:41:31.406071964 +0000 UTC Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.696155 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:03 crc kubenswrapper[4820]: E0221 06:48:03.696310 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.705265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.808575 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:03 crc kubenswrapper[4820]: I0221 06:48:03.911178 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:03Z","lastTransitionTime":"2026-02-21T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.014867 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.118608 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.222318 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325556 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.325573 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.428202 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.531162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.634941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.635131 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.665356 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:12:21.318623435 +0000 UTC Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.696828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:04 crc kubenswrapper[4820]: E0221 06:48:04.697047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.696840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:04 crc kubenswrapper[4820]: E0221 06:48:04.697141 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.696838 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:04 crc kubenswrapper[4820]: E0221 06:48:04.697210 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.738861 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.842945 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:04 crc kubenswrapper[4820]: I0221 06:48:04.945874 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:04Z","lastTransitionTime":"2026-02-21T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.049154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.152127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.253969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.254081 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.356179 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.459097 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.566191 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.666194 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:44:50.639754373 +0000 UTC Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.669990 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.695726 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:05 crc kubenswrapper[4820]: E0221 06:48:05.696020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.711878 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c
08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.734834 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.748004 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.763823 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.773480 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.781734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.796575 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.807280 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.821474 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.842357 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.856905 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876658 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876681 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.876694 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.877130 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.894121 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.915391 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.964408 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.978478 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:05Z","lastTransitionTime":"2026-02-21T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:05 crc kubenswrapper[4820]: I0221 06:48:05.986537 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:05Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.004802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:06Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:06 crc 
kubenswrapper[4820]: I0221 06:48:06.020884 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:06Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.082568 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186728 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.186778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290791 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.290851 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.394538 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497889 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.497947 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.601371 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.667128 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:33:05.232535451 +0000 UTC Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.696573 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.696661 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:06 crc kubenswrapper[4820]: E0221 06:48:06.696713 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.696574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:06 crc kubenswrapper[4820]: E0221 06:48:06.696856 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:06 crc kubenswrapper[4820]: E0221 06:48:06.697142 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.704900 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.810298 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:06 crc kubenswrapper[4820]: I0221 06:48:06.914225 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:06Z","lastTransitionTime":"2026-02-21T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.016553 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.119332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221650 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.221660 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324133 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.324177 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.427745 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.530871 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.632657 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.668078 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:03:20.991104798 +0000 UTC Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.696525 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:07 crc kubenswrapper[4820]: E0221 06:48:07.696653 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.735712 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.839230 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:07 crc kubenswrapper[4820]: I0221 06:48:07.942586 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:07Z","lastTransitionTime":"2026-02-21T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.045529 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.147830 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.250481 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.352926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.454653 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.557417 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.660286 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.668833 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:37:33.114606098 +0000 UTC Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.696178 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.696220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.696273 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:08 crc kubenswrapper[4820]: E0221 06:48:08.696331 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:08 crc kubenswrapper[4820]: E0221 06:48:08.696717 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:08 crc kubenswrapper[4820]: E0221 06:48:08.696752 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.762379 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.865925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.865988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.866005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.866029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.866046 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:08 crc kubenswrapper[4820]: I0221 06:48:08.968701 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:08Z","lastTransitionTime":"2026-02-21T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.070558 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.172734 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.275466 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.379398 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.482191 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.585137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.668936 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:54:03.72796601 +0000 UTC Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.689444 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.695711 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:09 crc kubenswrapper[4820]: E0221 06:48:09.695833 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.791978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894904 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.894978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:09 crc kubenswrapper[4820]: I0221 06:48:09.997338 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:09Z","lastTransitionTime":"2026-02-21T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.099582 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.201714 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.303355 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.405718 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.508269 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.609933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.609993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.610007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.610027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.610043 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.669669 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:49:36.50189572 +0000 UTC Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.696145 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:10 crc kubenswrapper[4820]: E0221 06:48:10.696292 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.696379 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.696384 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:10 crc kubenswrapper[4820]: E0221 06:48:10.696667 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:10 crc kubenswrapper[4820]: E0221 06:48:10.696838 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712201 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.712211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.815584 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:10 crc kubenswrapper[4820]: I0221 06:48:10.917704 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:10Z","lastTransitionTime":"2026-02-21T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.020926 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.122966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.225328 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.327843 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.430585 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.534188 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.636595 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.670284 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:27:25.308896907 +0000 UTC Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.696637 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:11 crc kubenswrapper[4820]: E0221 06:48:11.696779 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.739451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841338 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.841370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:11 crc kubenswrapper[4820]: I0221 06:48:11.943375 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:11Z","lastTransitionTime":"2026-02-21T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045379 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.045498 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.147820 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.250950 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.353276 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.455634 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.557331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659637 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.659673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.671288 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:44:55.181374255 +0000 UTC
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.696570 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.696602 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.696665 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.696733 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.696824 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.697188 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.697806 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1"
Feb 21 06:48:12 crc kubenswrapper[4820]: E0221 06:48:12.698072 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.762361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864722 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.864748 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:12 crc kubenswrapper[4820]: I0221 06:48:12.967621 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:12Z","lastTransitionTime":"2026-02-21T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.070660 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172879 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.172987 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.274898 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.376959 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.473751 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.484623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.484759 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.484815 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:48:45.484797715 +0000 UTC m=+100.517881923 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.484852 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-1
03b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.488784 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.499251 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502221 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.502229 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.513204 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.515982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.516059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.531443 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.534677 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.546322 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:13Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.546432 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547608 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.547664 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.650517 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.672172 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:33:03.509682071 +0000 UTC Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.696727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:13 crc kubenswrapper[4820]: E0221 06:48:13.696846 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.752985 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854895 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.854905 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957318 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:13 crc kubenswrapper[4820]: I0221 06:48:13.957356 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:13Z","lastTransitionTime":"2026-02-21T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.059983 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163188 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.163229 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265258 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.265288 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.367610 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.469927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.469978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.469991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.470008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.470019 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.572639 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.672902 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:25:18.208493641 +0000 UTC Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.674495 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.695820 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.695824 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.696034 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:14 crc kubenswrapper[4820]: E0221 06:48:14.696119 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:14 crc kubenswrapper[4820]: E0221 06:48:14.696285 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:14 crc kubenswrapper[4820]: E0221 06:48:14.696347 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.706118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777105 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.777127 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.879620 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:14 crc kubenswrapper[4820]: I0221 06:48:14.981694 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:14Z","lastTransitionTime":"2026-02-21T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.084570 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086385 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/0.log" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086413 4820 generic.go:334] "Generic (PLEG): container finished" podID="abdb469c-ba72-4790-9ce3-785f4facbcb9" containerID="27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a" exitCode=1 Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerDied","Data":"27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.086950 4820 scope.go:117] "RemoveContainer" containerID="27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.097954 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.109975 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.120155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.133606 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.149138 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.158376 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.170011 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.186957 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.190693 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.203639 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.213856 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc 
kubenswrapper[4820]: I0221 06:48:15.226266 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.235560 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.243763 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.253734 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.264632 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.274270 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.287207 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.288964 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.298525 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390873 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.390894 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492765 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.492793 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.595262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.673935 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:42:22.748935318 +0000 UTC Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.696519 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:15 crc kubenswrapper[4820]: E0221 06:48:15.696635 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.697976 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.714326 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.726258 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.737361 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.748906 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.759224 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.773522 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.789761 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z"
Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.800528 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.802861 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa1
14db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.814495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.826797 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.837396 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.849315 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.865478 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.877292 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.888108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc 
kubenswrapper[4820]: I0221 06:48:15.897589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.901948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.902052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.902126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 
06:48:15.902206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.902312 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:15Z","lastTransitionTime":"2026-02-21T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.908620 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:15 crc kubenswrapper[4820]: I0221 06:48:15.918480 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:15Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc 
kubenswrapper[4820]: I0221 06:48:16.004924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.004986 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.091669 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/0.log" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.091954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.106902 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc 
kubenswrapper[4820]: I0221 06:48:16.107649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.107659 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.120076 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.131819 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.144204 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.155662 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.165790 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.176961 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.190101 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.204024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.209530 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.214319 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa1
14db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.227165 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.241566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.255316 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.269677 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.287098 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.300045 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.310833 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc 
kubenswrapper[4820]: I0221 06:48:16.311838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.311922 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.320305 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:16Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.413904 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.515544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.618232 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.674560 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:37:39.241883751 +0000 UTC Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.696036 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:16 crc kubenswrapper[4820]: E0221 06:48:16.696300 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.696562 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.696557 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:16 crc kubenswrapper[4820]: E0221 06:48:16.696694 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:16 crc kubenswrapper[4820]: E0221 06:48:16.696876 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722204 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.722234 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825650 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825751 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.825773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:16 crc kubenswrapper[4820]: I0221 06:48:16.927967 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:16Z","lastTransitionTime":"2026-02-21T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.030638 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132319 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.132346 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.235308 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.337992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.338009 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.439878 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.439979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.440000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.440025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.440042 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.541990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.542000 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.644640 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.674840 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:40:03.052126474 +0000 UTC Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.696662 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:17 crc kubenswrapper[4820]: E0221 06:48:17.697011 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.747650 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.850512 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952768 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:17 crc kubenswrapper[4820]: I0221 06:48:17.952785 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:17Z","lastTransitionTime":"2026-02-21T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.055626 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.158708 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.262694 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.364870 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.467403 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.569759 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.671749 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.675816 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:56:18.424829213 +0000 UTC Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.696180 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:18 crc kubenswrapper[4820]: E0221 06:48:18.696335 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.696377 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.696380 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:18 crc kubenswrapper[4820]: E0221 06:48:18.696467 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:18 crc kubenswrapper[4820]: E0221 06:48:18.696502 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.774122 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876769 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.876778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:18 crc kubenswrapper[4820]: I0221 06:48:18.979100 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:18Z","lastTransitionTime":"2026-02-21T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081793 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.081848 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184125 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.184154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.286430 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.389102 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.490955 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593890 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.593987 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.676357 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:50:55.983373237 +0000 UTC Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.695914 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:19 crc kubenswrapper[4820]: E0221 06:48:19.696024 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.696463 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.798546 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:19 crc kubenswrapper[4820]: I0221 06:48:19.901688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:19Z","lastTransitionTime":"2026-02-21T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.004637 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.106850 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.209688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.312927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.313065 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.416457 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.518974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.519039 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.621526 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.676825 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:37:30.36137705 +0000 UTC Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.696171 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.696215 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:20 crc kubenswrapper[4820]: E0221 06:48:20.696401 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.696265 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:20 crc kubenswrapper[4820]: E0221 06:48:20.696516 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:20 crc kubenswrapper[4820]: E0221 06:48:20.696739 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.724267 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.825849 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:20 crc kubenswrapper[4820]: I0221 06:48:20.928545 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:20Z","lastTransitionTime":"2026-02-21T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.030557 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.131974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.132066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.234545 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.336898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.336966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.336989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.337018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.337040 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.439383 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541849 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.541896 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.644972 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.677324 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:24:43.519180371 +0000 UTC Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.695601 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:21 crc kubenswrapper[4820]: E0221 06:48:21.695793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.746929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.746974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.747007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.747019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.747027 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.848831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.849334 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:21 crc kubenswrapper[4820]: I0221 06:48:21.952170 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:21Z","lastTransitionTime":"2026-02-21T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.054984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.055063 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.157947 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.260522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.363477 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.466207 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569877 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.569928 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.672668 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.678309 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:21:58.44912321 +0000 UTC Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.696422 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.696471 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:22 crc kubenswrapper[4820]: E0221 06:48:22.696507 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:22 crc kubenswrapper[4820]: E0221 06:48:22.696688 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.696814 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:22 crc kubenswrapper[4820]: E0221 06:48:22.697039 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.775208 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.878505 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:22 crc kubenswrapper[4820]: I0221 06:48:22.981298 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:22Z","lastTransitionTime":"2026-02-21T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.083199 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.185894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.186068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.288992 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.289016 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.391311 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.493912 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596088 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.596166 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.643616 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.662177 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.666989 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.678647 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:05:17.20282179 +0000 UTC Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.680913 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",
\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.684807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.685182 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.695985 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.696136 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.704167 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.707980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708039 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.708101 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.720956 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.725463 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.740347 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:23Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:23 crc kubenswrapper[4820]: E0221 06:48:23.740473 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.741971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.742038 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844881 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844907 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.844927 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:23 crc kubenswrapper[4820]: I0221 06:48:23.947562 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:23Z","lastTransitionTime":"2026-02-21T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.050397 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.153572 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.256541 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.359636 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.462422 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.564866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.565548 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.668999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.669027 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.679457 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:17:38.511263478 +0000 UTC Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.695982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.696103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.696296 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:24 crc kubenswrapper[4820]: E0221 06:48:24.696294 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:24 crc kubenswrapper[4820]: E0221 06:48:24.696389 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:24 crc kubenswrapper[4820]: E0221 06:48:24.697171 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.697894 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.772278 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.874746 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:24 crc kubenswrapper[4820]: I0221 06:48:24.977442 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:24Z","lastTransitionTime":"2026-02-21T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.079840 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.118035 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.120723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.121165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.145495 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.163289 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.178284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.182285 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.197211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.220046 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.251826 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.264956 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.277598 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.284415 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.295625 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.307427 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.325903 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.336372 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.349810 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.369417 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc 
kubenswrapper[4820]: I0221 06:48:25.380189 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 
06:48:25.386827 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.386839 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.394945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.411729 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run
/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.422609 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488904 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc 
kubenswrapper[4820]: I0221 06:48:25.488983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.488998 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.591758 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.681421 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:23:38.869580198 +0000 UTC Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.694566 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.695667 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:25 crc kubenswrapper[4820]: E0221 06:48:25.695847 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.717731 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.737156 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.751490 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.761281 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.776059 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.789759 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796363 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.796469 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.806925 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.825901 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.843619 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.864088 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.882385 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.895570 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.899331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:25Z","lastTransitionTime":"2026-02-21T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.926157 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run
/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.942221 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.953833 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc 
kubenswrapper[4820]: I0221 06:48:25.966565 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.980451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:25 crc kubenswrapper[4820]: I0221 06:48:25.995071 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:25Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc 
kubenswrapper[4820]: I0221 06:48:26.002279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.002290 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.104261 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.124577 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.125381 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/2.log" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128019 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" exitCode=1 Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128095 4820 scope.go:117] "RemoveContainer" containerID="9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.128729 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.128978 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.142491 4820 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.160494 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.176513 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.190637 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.203308 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.207804 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.216273 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d176e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.230886 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.250478 4820 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.262891 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf
868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.274597 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.289028 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.306122 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.309900 4820 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.325004 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.350072 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa5b8faa268fa9bd596e14c4cda36b81a37d907d95d2cab49ac1183a74dfdb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:47:57Z\\\",\\\"message\\\":\\\"e crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:47:57Z is after 
2025-08-24T17:21:41Z]\\\\nI0221 06:47:57.487024 6520 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:25Z\\\",\\\"message\\\":\\\"empt(s)\\\\nI0221 06:48:25.634767 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0221 06:48:25.634977 6918 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 06:48:25.634798 6918 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635097 6918 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635152 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0221 06:48:25.635208 6918 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0221 06:48:25.634983 6918 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0221 06:48:25.635347 6918 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0221 06:48:25.635123 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac23811
14c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.363551 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.376176 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc 
kubenswrapper[4820]: I0221 06:48:26.387653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.402687 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:26Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.412974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.413053 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515619 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.515676 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.617528 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.682134 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:11:00.672797712 +0000 UTC Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.696493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.696526 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.696613 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.696622 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.696778 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:26 crc kubenswrapper[4820]: E0221 06:48:26.696906 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.720507 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822880 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.822979 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:26 crc kubenswrapper[4820]: I0221 06:48:26.925081 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:26Z","lastTransitionTime":"2026-02-21T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027376 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.027434 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.130370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.132351 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.135894 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:48:27 crc kubenswrapper[4820]: E0221 06:48:27.136115 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.148146 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce38546e-524f-4801-8ee1-b4bb9d6c6dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99e2ae812f4befd1922c3e4bcdd35a3d092e1c8a9e606d08f98b728df56c1e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04395dbdc3966e272ac8672e98470fa0df681639
c5f93bff6bd86f4f42a0e9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzcs9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qth8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.163109 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.176186 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe47cd07b9e898aa1726a5629779d5afc16ea0e2ef0748522bda4d52733768a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a87d9bd08b96d48b46c464b85229883451e76c630982a35695e2e59bf7adbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.188006 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5b221c8fd88579edb0366a668b4a1830c11f833d7a64de2179970316efc569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.199519 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tv4k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80b29fd0-922f-41c6-8ff4-dfa111ff89ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e11b3899c32b03f782add5cf67ed42c75a51ffd67657fcd1f442bc049282426\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fhl67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tv4k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.209837 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t5qxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8767767-a460-416a-b2c2-82a8d9eebb1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a2dad0814df76dd62597c1e8a8a2168483bc0e5ed7b76c1544d1
76e5377c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nlct7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t5qxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.224045 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3e3d1-40db-4935-a10b-526d7b99b88e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda674d950abd26df96f3107a84c391f4234a41b026bf5280ba1c9484739451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26bf6358f495ba490be5a0eb3c5452cc11047f26b6033dd056b204cc1598d888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f858ecf8efe54cdc5a05d2733fd0ff450c0492bf3fe67008c6b1ccc00cff70c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.232440 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.236958 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d32e209e266b130adcf03e2a4a433822507f0e547c5202b64349d90721da651c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.250589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ec6947-ce15-48d4-8d3b-a557e59916be\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68ab6596dcba5f46e0bc6f7a8fce06e59b59e231b0ef55aac39dfb9bba655a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac589269abb4273900d9b20d51fa114db7f994ff5be47fcf21f54025c60192b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa49ea7daaebd20576c1d07922d8a19cad19b4fbe90647f5129cab7b1fab6174\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e04699187b5f2c6334eff56f86b0d4d08dd3fabf868157f1aa3259ac3764cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.262875 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.282521 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"086516d1-6ffd-4d1f-b222-898336aa9960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d1a58acccc322eb89f871a4ea3d9cb1afbacb13ba22e9ab37afa4762cdd3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edd23b0a2db49af9e58f9512644ba18a7ad056f78f11e9df99a774f0e8257297\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://304e00f098e5d85908090ce9a86ced8b9cd35fbee55dd2af6c1d07d35e01fa4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d404dc1776143d6cd0d95f97854125f155c4b51326d0f74370c5162ab2015b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73de0
6b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de06b366b722598a66b00f5b007d1721cbb5ea5de65b2c8544551e71835e88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280c2dd55aab980304aac2cd22f85b806e2740924713c71ad4edd36fe4cd19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e024fa422d604850fe2ec8654ce07c68a00b41ac7263c40c39d7ef5bb5f6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-654bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xpb8z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.296998 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-94gxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abdb469c-ba72-4790-9ce3-785f4facbcb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-21T06:48:14Z\\\",\\\"message\\\":\\\"2026-02-21T06:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b\\\\n2026-02-21T06:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32f9710f-3574-415f-9aff-fc5a067a401b to /host/opt/cni/bin/\\\\n2026-02-21T06:47:29Z [verbose] multus-daemon started\\\\n2026-02-21T06:47:29Z [verbose] Readiness Indicator file check\\\\n2026-02-21T06:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56bf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-94gxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.317050 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd5c3ae-202a-4133-9af8-c4f2e51eea00\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-21T06:47:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0221 06:47:25.649196 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0221 06:47:25.649367 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0221 06:47:25.650147 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3148907248/tls.crt::/tmp/serving-cert-3148907248/tls.key\\\\\\\"\\\\nI0221 06:47:25.944778 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0221 06:47:25.947786 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0221 06:47:25.947810 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0221 06:47:25.947836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0221 06:47:25.947843 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0221 06:47:25.953730 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0221 06:47:25.953751 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0221 06:47:25.953755 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0221 06:47:25.953757 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0221 06:47:25.953759 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0221 06:47:25.953778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0221 06:47:25.953784 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0221 06:47:25.953789 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0221 06:47:25.954769 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e91babb23bc88fd14ff20561497302d5ee
770bd73b58ce08707c4e740e3efff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.334979 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.339333 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:25Z\\\",\\\"message\\\":\\\"empt(s)\\\\nI0221 06:48:25.634767 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0221 06:48:25.634977 6918 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 06:48:25.634798 6918 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635097 6918 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635152 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0221 06:48:25.635208 6918 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0221 06:48:25.634983 6918 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0221 06:48:25.635347 6918 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0221 06:48:25.635123 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.352021 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.362204 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc 
kubenswrapper[4820]: I0221 06:48:27.373963 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.389558 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:27Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.437925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.437987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.438007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.438036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.438059 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.540834 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.643459 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.683012 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:50:25.716783025 +0000 UTC Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.696705 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:27 crc kubenswrapper[4820]: E0221 06:48:27.696901 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.746370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.848884 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:27 crc kubenswrapper[4820]: I0221 06:48:27.952692 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:27Z","lastTransitionTime":"2026-02-21T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.054910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.054988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.055009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.055035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.055052 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.158412 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261364 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.261391 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363882 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.363990 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.466951 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.569598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.672520 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.683935 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:08:33.464087704 +0000 UTC Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.696487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.696550 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.696573 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:28 crc kubenswrapper[4820]: E0221 06:48:28.696610 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:28 crc kubenswrapper[4820]: E0221 06:48:28.696745 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:28 crc kubenswrapper[4820]: E0221 06:48:28.696955 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.775162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.877765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980905 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980928 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:28 crc kubenswrapper[4820]: I0221 06:48:28.980937 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:28Z","lastTransitionTime":"2026-02-21T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.084597 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.187411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.289985 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392698 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.392771 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499053 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.499208 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603067 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.603160 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.684853 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:04:06.370953305 +0000 UTC Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.696394 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:29 crc kubenswrapper[4820]: E0221 06:48:29.696567 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.705445 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.808800 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:29 crc kubenswrapper[4820]: I0221 06:48:29.911472 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:29Z","lastTransitionTime":"2026-02-21T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.014324 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.117222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.219769 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322609 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.322759 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.425977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.426003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.460543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.460824 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:34.460783992 +0000 UTC m=+149.493868230 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.529544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.562564 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562643 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562759 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562783 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562804 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562820 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562832 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562843 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562796 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.562755124 +0000 UTC m=+149.595839372 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562900 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.562926 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.562902309 +0000 UTC m=+149.595986547 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.563155 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.563116515 +0000 UTC m=+149.596200763 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.563192 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.563174796 +0000 UTC m=+149.596259034 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.632866 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.686064 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:08:16.079601612 +0000 UTC Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.696493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.696521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.696633 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.696693 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.696790 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:30 crc kubenswrapper[4820]: E0221 06:48:30.696969 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.735734 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.838799 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941913 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:30 crc kubenswrapper[4820]: I0221 06:48:30.941973 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:30Z","lastTransitionTime":"2026-02-21T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.045112 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.148359 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250694 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.250716 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.353278 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.456635 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559331 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.559351 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.662773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.687546 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:53:48.010932744 +0000 UTC Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.695982 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:31 crc kubenswrapper[4820]: E0221 06:48:31.696175 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.765956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.766092 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869703 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.869719 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972426 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:31 crc kubenswrapper[4820]: I0221 06:48:31.972444 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:31Z","lastTransitionTime":"2026-02-21T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.075797 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.177817 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.280987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.281004 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.384157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487100 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.487143 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591349 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.591466 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.688730 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:17:42.36893287 +0000 UTC Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.694174 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.696545 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.696674 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:32 crc kubenswrapper[4820]: E0221 06:48:32.696845 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.696915 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:32 crc kubenswrapper[4820]: E0221 06:48:32.697067 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:32 crc kubenswrapper[4820]: E0221 06:48:32.697226 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.797485 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:32 crc kubenswrapper[4820]: I0221 06:48:32.900441 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:32Z","lastTransitionTime":"2026-02-21T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.003336 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.105681 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.208517 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.311171 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.414687 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.516936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.516985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.516996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.517013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.517025 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.620618 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.689284 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:03:29.676898043 +0000 UTC Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.695732 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:33 crc kubenswrapper[4820]: E0221 06:48:33.695882 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.723159 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.826865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.827209 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:33 crc kubenswrapper[4820]: I0221 06:48:33.931176 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:33Z","lastTransitionTime":"2026-02-21T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.009745 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.030367 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.036602 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.058353 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.064332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.091122 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096851 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096875 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.096893 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.111802 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.115251 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.128566 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-21T06:48:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e79a2b5c-f808-4b7b-b373-103b6d673828\\\",\\\"systemUUID\\\":\\\"ec2c7a4f-4f2f-4567-9af1-65fc234d8f80\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:34Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.128833 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.130710 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233416 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.233457 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.337463 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.440142 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543102 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.543124 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.645461 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.689844 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:12:23.009900437 +0000 UTC Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.696313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.696401 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.696508 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.696599 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.696793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:34 crc kubenswrapper[4820]: E0221 06:48:34.697136 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.748765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.850494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:34 crc kubenswrapper[4820]: I0221 06:48:34.953301 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:34Z","lastTransitionTime":"2026-02-21T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.056505 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158884 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.158941 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.262631 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.365818 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468089 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.468152 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571376 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571398 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.571450 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674742 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.674857 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.690340 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:18:02.844426538 +0000 UTC Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.695726 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:35 crc kubenswrapper[4820]: E0221 06:48:35.695922 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.715293 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6041592-1ddd-4646-be76-a73a95e200ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c306d4c25f357a240ee9368e84534bb70b88aa82665a61d627ed12f10f0a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44cb30253a7246c6f48f665893972fca40a385aa2d9dff02a9e44130fad5bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is 
after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.733575 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.752867 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70ec449-ba11-47dd-a60c-f77993670045\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-21T06:48:25Z\\\",\\\"message\\\":\\\"empt(s)\\\\nI0221 06:48:25.634767 6918 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0221 06:48:25.634977 6918 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0221 06:48:25.634798 6918 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635097 6918 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0221 06:48:25.635152 6918 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0221 06:48:25.635208 6918 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0221 06:48:25.634983 6918 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0221 06:48:25.635347 6918 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0221 06:48:25.635123 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-21T06:48:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e63669cca8c6e9241d
61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-21T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-21T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wgvx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bvfjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.767940 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d837134d-9746-4fda-af7c-acf3077a61c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6170fc10348327ac2d97f250dc8ddcf8f46b94f900259ecfeec5e72fb539b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb48c2213469ea752dc689f39e96710d83c
37fa6c66b0b2a39282bf3e2cc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-21T06:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8646\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hbb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777463 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777541 4820 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.777578 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.784134 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4537dd3-6e3b-481a-9f90-668020b5558b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T06:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tf29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T06:47:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bt6wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-21T06:48:35Z is after 2025-08-24T17:21:41Z" Feb 
21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.868149 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.868129227 podStartE2EDuration="1m6.868129227s" podCreationTimestamp="2026-02-21 06:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.867928601 +0000 UTC m=+90.901012809" watchObservedRunningTime="2026-02-21 06:48:35.868129227 +0000 UTC m=+90.901213435" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.868422 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podStartSLOduration=69.868415115 podStartE2EDuration="1m9.868415115s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.853032738 +0000 UTC m=+90.886116936" watchObservedRunningTime="2026-02-21 06:48:35.868415115 +0000 UTC m=+90.901499323" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.880483 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.936520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tv4k8" podStartSLOduration=70.936499493 podStartE2EDuration="1m10.936499493s" podCreationTimestamp="2026-02-21 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.935728901 +0000 UTC m=+90.968813109" watchObservedRunningTime="2026-02-21 06:48:35.936499493 +0000 UTC m=+90.969583701" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.968785 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t5qxz" podStartSLOduration=69.96876818 podStartE2EDuration="1m9.96876818s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.951213301 +0000 UTC m=+90.984297509" watchObservedRunningTime="2026-02-21 06:48:35.96876818 +0000 UTC m=+91.001852388" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.968881 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.968876953 podStartE2EDuration="1m9.968876953s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.96842808 +0000 UTC 
m=+91.001512318" watchObservedRunningTime="2026-02-21 06:48:35.968876953 +0000 UTC m=+91.001961161" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983772 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:35Z","lastTransitionTime":"2026-02-21T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:35 crc kubenswrapper[4820]: I0221 06:48:35.983801 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.983787667 podStartE2EDuration="39.983787667s" podCreationTimestamp="2026-02-21 06:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:35.983325464 +0000 UTC m=+91.016409702" watchObservedRunningTime="2026-02-21 06:48:35.983787667 +0000 UTC m=+91.016871875" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.016031 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xpb8z" podStartSLOduration=70.016009573 podStartE2EDuration="1m10.016009573s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:36.01593106 +0000 UTC m=+91.049015278" watchObservedRunningTime="2026-02-21 06:48:36.016009573 +0000 UTC m=+91.049093781" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.060704 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-94gxr" podStartSLOduration=70.06068293 podStartE2EDuration="1m10.06068293s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:36.051210515 +0000 UTC m=+91.084294723" watchObservedRunningTime="2026-02-21 06:48:36.06068293 +0000 UTC m=+91.093767138" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085622 
4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.085654 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.189330 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.291625 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394447 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.394490 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.500953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.501376 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.605347 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.691420 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:40:47.237192615 +0000 UTC Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.695917 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:36 crc kubenswrapper[4820]: E0221 06:48:36.696137 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.696292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.696447 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:36 crc kubenswrapper[4820]: E0221 06:48:36.696520 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:36 crc kubenswrapper[4820]: E0221 06:48:36.696926 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708872 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.708937 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.812218 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916633 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:36 crc kubenswrapper[4820]: I0221 06:48:36.916683 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:36Z","lastTransitionTime":"2026-02-21T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.020213 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.123659 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.227739 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332336 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.332363 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.435877 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.538302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.641489 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.692416 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:20:10.724603197 +0000 UTC Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.695942 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:37 crc kubenswrapper[4820]: E0221 06:48:37.696306 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.744955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.745119 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.847967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.848082 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.951961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:37 crc kubenswrapper[4820]: I0221 06:48:37.952066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:37Z","lastTransitionTime":"2026-02-21T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.054682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.054976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.055139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.055312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.055452 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.158313 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.261429 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.364935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.365077 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.468540 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.572296 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.675899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.675954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.675974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.676004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.676028 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.692645 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:02:27.656622339 +0000 UTC
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.696054 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.696132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.696063 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:38 crc kubenswrapper[4820]: E0221 06:48:38.696344 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:38 crc kubenswrapper[4820]: E0221 06:48:38.696483 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:38 crc kubenswrapper[4820]: E0221 06:48:38.696628 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.779499 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881463 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.881530 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.984985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:38 crc kubenswrapper[4820]: I0221 06:48:38.985133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:38Z","lastTransitionTime":"2026-02-21T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.088282 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.190760 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.293786 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.396725 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.500203 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.603340 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.693802 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:06:28.565504675 +0000 UTC
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.696294 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:39 crc kubenswrapper[4820]: E0221 06:48:39.696492 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.697433 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"
Feb 21 06:48:39 crc kubenswrapper[4820]: E0221 06:48:39.697681 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.704998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.705014 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.807482 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910867 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:39 crc kubenswrapper[4820]: I0221 06:48:39.910902 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:39Z","lastTransitionTime":"2026-02-21T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.014134 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.117562 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219361 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.219432 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.322222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.424874 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.526938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527059 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.527087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.630103 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.694921 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:09:37.126095699 +0000 UTC
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.696103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.696133 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.696156 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:40 crc kubenswrapper[4820]: E0221 06:48:40.696285 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:40 crc kubenswrapper[4820]: E0221 06:48:40.696394 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:40 crc kubenswrapper[4820]: E0221 06:48:40.696530 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732258 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.732267 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.852532 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:40 crc kubenswrapper[4820]: I0221 06:48:40.955910 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:40Z","lastTransitionTime":"2026-02-21T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.059708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.060105 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.165967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.166994 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269598 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.269700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.372133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.474276 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.576721 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.679320 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.696049 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:29:25.032962181 +0000 UTC Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.696098 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:41 crc kubenswrapper[4820]: E0221 06:48:41.696384 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.782302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.884967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.885137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.987970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:41 crc kubenswrapper[4820]: I0221 06:48:41.988087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:41Z","lastTransitionTime":"2026-02-21T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.090798 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.193551 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.296640 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.399442 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.502137 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.604872 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.695828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.695903 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.695828 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:42 crc kubenswrapper[4820]: E0221 06:48:42.695989 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:42 crc kubenswrapper[4820]: E0221 06:48:42.696140 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.696265 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:34:18.725339204 +0000 UTC
Feb 21 06:48:42 crc kubenswrapper[4820]: E0221 06:48:42.696334 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.707919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.707974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.707991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.708012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.708030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.810826 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:42 crc kubenswrapper[4820]: I0221 06:48:42.913378 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:42Z","lastTransitionTime":"2026-02-21T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.016708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.017662 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.119870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.120742 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.223856 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.326995 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.327491 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.327738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.327953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.328180 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.430715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.430970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.431033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.431094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.431151 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.533790 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.636733 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.696403 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:18:25.897209873 +0000 UTC
Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.696481 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:43 crc kubenswrapper[4820]: E0221 06:48:43.696648 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738843 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.738862 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.841379 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:43 crc kubenswrapper[4820]: I0221 06:48:43.944332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:43Z","lastTransitionTime":"2026-02-21T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046876 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.046978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:44Z","lastTransitionTime":"2026-02-21T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.148978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.149053 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:44Z","lastTransitionTime":"2026-02-21T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.172813 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-21T06:48:44Z","lastTransitionTime":"2026-02-21T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.224297 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz"] Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.224822 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.227278 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.227909 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.228112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.230686 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.257669 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.25764439 podStartE2EDuration="30.25764439s" podCreationTimestamp="2026-02-21 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:44.256771326 +0000 UTC m=+99.289855554" watchObservedRunningTime="2026-02-21 06:48:44.25764439 +0000 UTC m=+99.290728628" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.325833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2852f5c7-0618-4070-a98c-3e5f6bc98db0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.325974 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2852f5c7-0618-4070-a98c-3e5f6bc98db0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.326131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.326176 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2852f5c7-0618-4070-a98c-3e5f6bc98db0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.326200 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.327331 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hbb7" podStartSLOduration=77.327318885 podStartE2EDuration="1m17.327318885s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:44.326430948 +0000 UTC m=+99.359515186" watchObservedRunningTime="2026-02-21 06:48:44.327318885 +0000 UTC m=+99.360403083" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.427639 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.427920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2852f5c7-0618-4070-a98c-3e5f6bc98db0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2852f5c7-0618-4070-a98c-3e5f6bc98db0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2852f5c7-0618-4070-a98c-3e5f6bc98db0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.427800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.428143 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2852f5c7-0618-4070-a98c-3e5f6bc98db0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.429121 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2852f5c7-0618-4070-a98c-3e5f6bc98db0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.441815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2852f5c7-0618-4070-a98c-3e5f6bc98db0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.445049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2852f5c7-0618-4070-a98c-3e5f6bc98db0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vd9hz\" (UID: \"2852f5c7-0618-4070-a98c-3e5f6bc98db0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.548558 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696687 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696733 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696707 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:44:09.974763396 +0000 UTC Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696819 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.696703 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:44 crc kubenswrapper[4820]: E0221 06:48:44.696855 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:44 crc kubenswrapper[4820]: E0221 06:48:44.697003 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:44 crc kubenswrapper[4820]: E0221 06:48:44.697090 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:44 crc kubenswrapper[4820]: I0221 06:48:44.704157 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.196490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" event={"ID":"2852f5c7-0618-4070-a98c-3e5f6bc98db0","Type":"ContainerStarted","Data":"46a0a5b88337d0bc4f6d9e2d704d310e4730925f8729358746eb8ca7bc193bca"} Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.196824 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" event={"ID":"2852f5c7-0618-4070-a98c-3e5f6bc98db0","Type":"ContainerStarted","Data":"75e6cfb089709d4632a94173152d55fd4b9000f3e0a8900ac7a6200f851ca067"} Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.208998 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vd9hz" podStartSLOduration=79.208978996 podStartE2EDuration="1m19.208978996s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:45.208539003 +0000 UTC m=+100.241623201" watchObservedRunningTime="2026-02-21 06:48:45.208978996 +0000 UTC m=+100.242063194" Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.541695 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:45 
crc kubenswrapper[4820]: E0221 06:48:45.541822 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:45 crc kubenswrapper[4820]: E0221 06:48:45.541875 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs podName:a4537dd3-6e3b-481a-9f90-668020b5558b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:49.541857465 +0000 UTC m=+164.574941673 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs") pod "network-metrics-daemon-bt6wj" (UID: "a4537dd3-6e3b-481a-9f90-668020b5558b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 21 06:48:45 crc kubenswrapper[4820]: I0221 06:48:45.697613 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:45 crc kubenswrapper[4820]: E0221 06:48:45.697698 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:46 crc kubenswrapper[4820]: I0221 06:48:46.695779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:46 crc kubenswrapper[4820]: I0221 06:48:46.695840 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:48:46 crc kubenswrapper[4820]: E0221 06:48:46.695911 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:46 crc kubenswrapper[4820]: E0221 06:48:46.696030 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:48:46 crc kubenswrapper[4820]: I0221 06:48:46.695850 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:46 crc kubenswrapper[4820]: E0221 06:48:46.696230 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:48:47 crc kubenswrapper[4820]: I0221 06:48:47.696875 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:48:47 crc kubenswrapper[4820]: E0221 06:48:47.697365 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:48:47 crc kubenswrapper[4820]: I0221 06:48:47.718842 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 21 06:48:48 crc kubenswrapper[4820]: I0221 06:48:48.695913 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:48:48 crc kubenswrapper[4820]: I0221 06:48:48.695998 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:48:48 crc kubenswrapper[4820]: E0221 06:48:48.696063 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:48:48 crc kubenswrapper[4820]: I0221 06:48:48.695998 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:48 crc kubenswrapper[4820]: E0221 06:48:48.696233 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:48 crc kubenswrapper[4820]: E0221 06:48:48.696432 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:49 crc kubenswrapper[4820]: I0221 06:48:49.695872 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:49 crc kubenswrapper[4820]: E0221 06:48:49.696379 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:50 crc kubenswrapper[4820]: I0221 06:48:50.696180 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:50 crc kubenswrapper[4820]: I0221 06:48:50.696356 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:50 crc kubenswrapper[4820]: I0221 06:48:50.696409 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:50 crc kubenswrapper[4820]: E0221 06:48:50.696677 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:50 crc kubenswrapper[4820]: E0221 06:48:50.696831 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:50 crc kubenswrapper[4820]: E0221 06:48:50.696975 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:51 crc kubenswrapper[4820]: I0221 06:48:51.696103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:51 crc kubenswrapper[4820]: E0221 06:48:51.696368 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:52 crc kubenswrapper[4820]: I0221 06:48:52.696312 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:52 crc kubenswrapper[4820]: I0221 06:48:52.696629 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:52 crc kubenswrapper[4820]: I0221 06:48:52.696473 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:52 crc kubenswrapper[4820]: E0221 06:48:52.697087 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:52 crc kubenswrapper[4820]: E0221 06:48:52.697445 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:52 crc kubenswrapper[4820]: E0221 06:48:52.697664 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:53 crc kubenswrapper[4820]: I0221 06:48:53.696389 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:53 crc kubenswrapper[4820]: E0221 06:48:53.696830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.695647 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.695770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.695798 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.695867 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.695995 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.696114 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:54 crc kubenswrapper[4820]: I0221 06:48:54.697065 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"
Feb 21 06:48:54 crc kubenswrapper[4820]: E0221 06:48:54.697328 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bvfjp_openshift-ovn-kubernetes(a70ec449-ba11-47dd-a60c-f77993670045)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045"
Feb 21 06:48:55 crc kubenswrapper[4820]: I0221 06:48:55.696029 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:55 crc kubenswrapper[4820]: E0221 06:48:55.698570 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:55 crc kubenswrapper[4820]: I0221 06:48:55.741036 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.741000214 podStartE2EDuration="8.741000214s" podCreationTimestamp="2026-02-21 06:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:48:55.738999715 +0000 UTC m=+110.772083933" watchObservedRunningTime="2026-02-21 06:48:55.741000214 +0000 UTC m=+110.774084452"
Feb 21 06:48:56 crc kubenswrapper[4820]: I0221 06:48:56.696697 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:56 crc kubenswrapper[4820]: I0221 06:48:56.696798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:56 crc kubenswrapper[4820]: I0221 06:48:56.696842 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:56 crc kubenswrapper[4820]: E0221 06:48:56.697004 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:56 crc kubenswrapper[4820]: E0221 06:48:56.697126 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:56 crc kubenswrapper[4820]: E0221 06:48:56.697274 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:57 crc kubenswrapper[4820]: I0221 06:48:57.696483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:57 crc kubenswrapper[4820]: E0221 06:48:57.696648 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:48:58 crc kubenswrapper[4820]: I0221 06:48:58.696435 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:48:58 crc kubenswrapper[4820]: I0221 06:48:58.696472 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:48:58 crc kubenswrapper[4820]: E0221 06:48:58.696546 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:48:58 crc kubenswrapper[4820]: I0221 06:48:58.696596 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:48:58 crc kubenswrapper[4820]: E0221 06:48:58.696733 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:48:58 crc kubenswrapper[4820]: E0221 06:48:58.696768 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:48:59 crc kubenswrapper[4820]: I0221 06:48:59.696326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:48:59 crc kubenswrapper[4820]: E0221 06:48:59.696850 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:00 crc kubenswrapper[4820]: I0221 06:49:00.696023 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:00 crc kubenswrapper[4820]: I0221 06:49:00.695999 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:00 crc kubenswrapper[4820]: I0221 06:49:00.696106 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:00 crc kubenswrapper[4820]: E0221 06:49:00.696178 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:00 crc kubenswrapper[4820]: E0221 06:49:00.696299 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:00 crc kubenswrapper[4820]: E0221 06:49:00.696791 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.300471 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log"
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301129 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/0.log"
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301215 4820 generic.go:334] "Generic (PLEG): container finished" podID="abdb469c-ba72-4790-9ce3-785f4facbcb9" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf" exitCode=1
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerDied","Data":"e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"}
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301369 4820 scope.go:117] "RemoveContainer" containerID="27faccfa7726c3c4fd6a61efc6d0fe1c64d3db2bf75a5473a53c4c849130be0a"
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.301897 4820 scope.go:117] "RemoveContainer" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"
Feb 21 06:49:01 crc kubenswrapper[4820]: E0221 06:49:01.302140 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-94gxr_openshift-multus(abdb469c-ba72-4790-9ce3-785f4facbcb9)\"" pod="openshift-multus/multus-94gxr" podUID="abdb469c-ba72-4790-9ce3-785f4facbcb9"
Feb 21 06:49:01 crc kubenswrapper[4820]: I0221 06:49:01.696553 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:01 crc kubenswrapper[4820]: E0221 06:49:01.696699 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.305891 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log"
Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.695710 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.695815 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:02 crc kubenswrapper[4820]: I0221 06:49:02.695740 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:02 crc kubenswrapper[4820]: E0221 06:49:02.695908 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:02 crc kubenswrapper[4820]: E0221 06:49:02.696114 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:02 crc kubenswrapper[4820]: E0221 06:49:02.696215 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:03 crc kubenswrapper[4820]: I0221 06:49:03.696368 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:03 crc kubenswrapper[4820]: E0221 06:49:03.696673 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:04 crc kubenswrapper[4820]: I0221 06:49:04.695886 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:04 crc kubenswrapper[4820]: E0221 06:49:04.696025 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:04 crc kubenswrapper[4820]: I0221 06:49:04.695896 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:04 crc kubenswrapper[4820]: I0221 06:49:04.695896 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:04 crc kubenswrapper[4820]: E0221 06:49:04.696115 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:04 crc kubenswrapper[4820]: E0221 06:49:04.696392 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:05 crc kubenswrapper[4820]: E0221 06:49:05.669254 4820 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 21 06:49:05 crc kubenswrapper[4820]: I0221 06:49:05.696651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:05 crc kubenswrapper[4820]: E0221 06:49:05.697451 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:05 crc kubenswrapper[4820]: E0221 06:49:05.838614 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 21 06:49:06 crc kubenswrapper[4820]: I0221 06:49:06.695989 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:06 crc kubenswrapper[4820]: I0221 06:49:06.696109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:06 crc kubenswrapper[4820]: I0221 06:49:06.697083 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:06 crc kubenswrapper[4820]: E0221 06:49:06.697201 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:06 crc kubenswrapper[4820]: E0221 06:49:06.697407 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:06 crc kubenswrapper[4820]: E0221 06:49:06.697572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:07 crc kubenswrapper[4820]: I0221 06:49:07.696547 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:07 crc kubenswrapper[4820]: E0221 06:49:07.696731 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:08 crc kubenswrapper[4820]: I0221 06:49:08.696315 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:08 crc kubenswrapper[4820]: I0221 06:49:08.696323 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:08 crc kubenswrapper[4820]: I0221 06:49:08.696315 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:08 crc kubenswrapper[4820]: E0221 06:49:08.696498 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:08 crc kubenswrapper[4820]: E0221 06:49:08.696642 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:08 crc kubenswrapper[4820]: E0221 06:49:08.696976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:09 crc kubenswrapper[4820]: I0221 06:49:09.695840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:09 crc kubenswrapper[4820]: E0221 06:49:09.696053 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:09 crc kubenswrapper[4820]: I0221 06:49:09.696796 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.336019 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log"
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.338451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerStarted","Data":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"}
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.338819 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp"
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.362874 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podStartSLOduration=103.362857057 podStartE2EDuration="1m43.362857057s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:10.362390773 +0000 UTC m=+125.395474981" watchObservedRunningTime="2026-02-21 06:49:10.362857057 +0000 UTC m=+125.395941255"
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.631965 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bt6wj"]
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.632140 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.632367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.696656 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:10 crc kubenswrapper[4820]: I0221 06:49:10.696707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.696778 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.696839 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:10 crc kubenswrapper[4820]: E0221 06:49:10.840216 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 21 06:49:11 crc kubenswrapper[4820]: I0221 06:49:11.696636 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:11 crc kubenswrapper[4820]: E0221 06:49:11.696766 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:11 crc kubenswrapper[4820]: I0221 06:49:11.696842 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:11 crc kubenswrapper[4820]: E0221 06:49:11.697063 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b"
Feb 21 06:49:12 crc kubenswrapper[4820]: I0221 06:49:12.696559 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 21 06:49:12 crc kubenswrapper[4820]: I0221 06:49:12.696586 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 21 06:49:12 crc kubenswrapper[4820]: E0221 06:49:12.696673 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 21 06:49:12 crc kubenswrapper[4820]: E0221 06:49:12.696789 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 21 06:49:13 crc kubenswrapper[4820]: I0221 06:49:13.696406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 21 06:49:13 crc kubenswrapper[4820]: I0221 06:49:13.696542 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:13 crc kubenswrapper[4820]: E0221 06:49:13.696682 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 21 06:49:13 crc kubenswrapper[4820]: I0221 06:49:13.696808 4820 scope.go:117] "RemoveContainer" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"
Feb 21 06:49:13 crc kubenswrapper[4820]: E0221 06:49:13.696808 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.351877 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.351925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2"} Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.623923 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.695997 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:14 crc kubenswrapper[4820]: I0221 06:49:14.696030 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:14 crc kubenswrapper[4820]: E0221 06:49:14.696462 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 21 06:49:14 crc kubenswrapper[4820]: E0221 06:49:14.696549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 21 06:49:15 crc kubenswrapper[4820]: I0221 06:49:15.695846 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:15 crc kubenswrapper[4820]: I0221 06:49:15.695901 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:15 crc kubenswrapper[4820]: E0221 06:49:15.698037 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 21 06:49:15 crc kubenswrapper[4820]: E0221 06:49:15.698259 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bt6wj" podUID="a4537dd3-6e3b-481a-9f90-668020b5558b" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.696318 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.696317 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.699088 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 21 06:49:16 crc kubenswrapper[4820]: I0221 06:49:16.699629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.696264 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.696378 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.699909 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.700043 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.700125 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 21 06:49:17 crc kubenswrapper[4820]: I0221 06:49:17.700420 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.249575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.297429 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.298008 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.298627 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.299316 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.299752 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.300392 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.300738 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.301142 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.301563 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.301877 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.302013 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.302384 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.303435 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nnhcf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.304107 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.304350 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.304650 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.307169 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308365 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308512 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308764 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308816 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.308903 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.309185 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 21 06:49:25 crc 
kubenswrapper[4820]: I0221 06:49:25.310021 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.310083 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.312730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.315130 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4dt74"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.316081 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.316397 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.316898 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.317151 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.317728 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319323 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319571 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319673 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319800 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319979 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319987 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320034 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.319983 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320289 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320330 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320292 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.320177 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.321899 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.322712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.323690 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.323710 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.323966 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.324174 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.325483 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.326161 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.326725 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.326954 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.339782 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.340671 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.341552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.342782 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.343097 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.343520 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.343817 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.344030 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.344402 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.344891 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.345101 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.345579 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.352444 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357009 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357116 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357040 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357436 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357580 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.357908 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.358489 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4kcq6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.358681 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.359876 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.360263 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.360829 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.361098 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.361182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.361630 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.362103 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.362813 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.363611 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.363672 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.364966 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.366146 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.366568 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.374635 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.374644 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.374907 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.375762 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.376897 
4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378051 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378140 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378819 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378925 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.378816 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.379128 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.379496 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.379801 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380118 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380389 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380568 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.381052 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.380586 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.385649 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.385850 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.387010 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.387088 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.390059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.391345 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cgbv7"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.391853 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.392993 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-policies\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-encryption-config\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393653 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393693 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-serving-cert\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393733 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-auth-proxy-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjhn\" (UniqueName: \"kubernetes.io/projected/b6775f10-01f3-4263-8441-ec5be6baf5c3-kube-api-access-lgjhn\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqm5\" (UniqueName: \"kubernetes.io/projected/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-kube-api-access-cfqm5\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-client\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393882 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393901 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8add43c0-9280-4e92-b4fe-4628eb645e56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-dir\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.393981 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-images\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-machine-approver-tls\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-config\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.394086 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6jbs\" (UniqueName: \"kubernetes.io/projected/8add43c0-9280-4e92-b4fe-4628eb645e56-kube-api-access-c6jbs\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.399342 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.400427 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.401978 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.402441 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.404135 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.404726 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405078 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405274 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.405840 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406180 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406344 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406403 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406428 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406528 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.406412 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.407744 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.422057 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.422300 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.422725 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.423390 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.423824 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.426535 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-q9pg5"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.428913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.433796 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.434463 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.441639 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.447303 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.447423 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.450077 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lt58x"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.450632 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm22t"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.451056 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.450943 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.451851 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.452327 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.459122 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.462371 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.462811 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.463185 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kxrb8"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.463683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.464458 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.464908 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.485798 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.486305 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kxrb8"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.486321 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.487503 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.486408 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.487975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.491988 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.492442 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.492602 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.492456 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493257 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493493 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493542 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.493593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494145 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494168 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494491 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.494563 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6jbs\" (UniqueName: \"kubernetes.io/projected/8add43c0-9280-4e92-b4fe-4628eb645e56-kube-api-access-c6jbs\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86913b03-f631-4bfa-8533-c43326d364ff-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495141 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-policies\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-encryption-config\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495181 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fm6pk"]
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495190 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e921dcf-57ab-41e2-9994-fb602aeec37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495260 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-config\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e921dcf-57ab-41e2-9994-fb602aeec37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495322 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-serving-cert\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495390 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86913b03-f631-4bfa-8533-c43326d364ff-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495409 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495425 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-auth-proxy-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495439 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjhn\" (UniqueName: \"kubernetes.io/projected/b6775f10-01f3-4263-8441-ec5be6baf5c3-kube-api-access-lgjhn\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqm5\" (UniqueName: \"kubernetes.io/projected/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-kube-api-access-cfqm5\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495490 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495507 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-client\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwd4p\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-kube-api-access-fwd4p\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-client\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495586 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495611 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8add43c0-9280-4e92-b4fe-4628eb645e56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-dir\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-images\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-serving-cert\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-service-ca\") pod
\"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-987r7\" (UniqueName: \"kubernetes.io/projected/77141b4f-e31f-4e63-a5cb-329ea918a5ed-kube-api-access-987r7\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-machine-approver-tls\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-config\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbf4j\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-kube-api-access-nbf4j\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495872 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496442 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496527 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.495747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.497285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-policies\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.497471 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.498002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.496524 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6775f10-01f3-4263-8441-ec5be6baf5c3-audit-dir\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.501012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-auth-proxy-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.501191 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-config\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.501271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-config\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.502039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.502569 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.502693 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.503142 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.503520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-serving-cert\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.504347 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sps4j"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-etcd-client\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8add43c0-9280-4e92-b4fe-4628eb645e56-images\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6775f10-01f3-4263-8441-ec5be6baf5c3-encryption-config\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.505384 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4dt74"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.506384 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.506476 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.507738 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.508858 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nnhcf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.510506 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.511644 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.512966 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm22t"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.513177 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.517945 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.518670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.518385 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.521336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8add43c0-9280-4e92-b4fe-4628eb645e56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.521625 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.522228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-machine-approver-tls\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.527303 4820 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lt58x"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.527378 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.529725 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.530787 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.534655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.534730 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cgbv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.536438 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4kcq6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.538713 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.539220 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.539754 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.541085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"] Feb 21 06:49:25 crc kubenswrapper[4820]: 
I0221 06:49:25.541929 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.543023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.544450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.545427 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kxrb8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.546604 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.547828 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.549506 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.550854 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.552750 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m7pv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.553517 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.554286 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.554476 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.555300 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.556531 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qhnw8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.557693 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.557864 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.559100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.559505 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.561734 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.562838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.564180 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m7pv7"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.565367 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qhnw8"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.566521 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fm6pk"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.567732 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sps4j"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.568743 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-z7jtv"] Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.569327 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.580230 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596878 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbf4j\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-kube-api-access-nbf4j\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596952 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.596990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86913b03-f631-4bfa-8533-c43326d364ff-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 
06:49:25.597020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e921dcf-57ab-41e2-9994-fb602aeec37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.597093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-config\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.597751 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.597935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-config\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.598482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e921dcf-57ab-41e2-9994-fb602aeec37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599043 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86913b03-f631-4bfa-8533-c43326d364ff-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599632 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e921dcf-57ab-41e2-9994-fb602aeec37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.599975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-client\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601217 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwd4p\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-kube-api-access-fwd4p\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e921dcf-57ab-41e2-9994-fb602aeec37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-serving-cert\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-service-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601482 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-987r7\" (UniqueName: \"kubernetes.io/projected/77141b4f-e31f-4e63-a5cb-329ea918a5ed-kube-api-access-987r7\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.601992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-service-ca\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.603344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-etcd-client\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.604180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77141b4f-e31f-4e63-a5cb-329ea918a5ed-serving-cert\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.619620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.639420 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.660332 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.679501 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.691619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/86913b03-f631-4bfa-8533-c43326d364ff-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.709777 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.711249 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86913b03-f631-4bfa-8533-c43326d364ff-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.719626 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.740472 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.780103 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.799785 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.820012 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.839795 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.859720 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.879805 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.899656 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.919855 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.940057 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.959920 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 21 06:49:25 crc kubenswrapper[4820]: I0221 06:49:25.980060 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.000231 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.019839 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.040394 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.060193 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.080259 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.099860 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.119894 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.140972 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.159832 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.180097 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.242744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.242757 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.242841 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.259828 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.280929 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.299686 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.319389 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.340407 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.359835 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.381004 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.419462 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.440643 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.460562 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.480540 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.499232 4820 request.go:700] Waited for 1.006703325s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.500525 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.520043 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.539780 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.560211 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.580099 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.599711 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.620260 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.640430 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.660615 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.680333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.700558 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.720268 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.740492 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.760810 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.779880 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.800158 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.834170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6jbs\" (UniqueName: \"kubernetes.io/projected/8add43c0-9280-4e92-b4fe-4628eb645e56-kube-api-access-c6jbs\") pod \"machine-api-operator-5694c8668f-zz4sx\" (UID: \"8add43c0-9280-4e92-b4fe-4628eb645e56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.841431 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.860435 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.880677 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.887990 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.899845 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.920538 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.953712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"controller-manager-879f6c89f-dhsbz\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.975470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgjhn\" (UniqueName: \"kubernetes.io/projected/b6775f10-01f3-4263-8441-ec5be6baf5c3-kube-api-access-lgjhn\") pod \"apiserver-7bbb656c7d-n6xh6\" (UID: \"b6775f10-01f3-4263-8441-ec5be6baf5c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:26 crc kubenswrapper[4820]: I0221 06:49:26.997332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqm5\" (UniqueName: \"kubernetes.io/projected/b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52-kube-api-access-cfqm5\") pod \"machine-approver-56656f9798-97n76\" (UID: \"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.012810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod \"route-controller-manager-6576b87f9c-4vn9x\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.019921 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.039820 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.040108 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zz4sx"]
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.060676 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.085522 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.100147 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.120175 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.134375 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.140033 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.161298 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.180881 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.200642 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.204188 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76"
Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.217326 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb92ec3e3_a4a6_4b99_9a3c_d1b97369ab52.slice/crio-9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb WatchSource:0}: Error finding container 9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb: Status 404 returned error can't find the container with id 9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.221899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.231138 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.240800 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.253293 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.260957 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.280320 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.302077 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.320836 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.324122 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"]
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.342548 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.354278 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda584a459_0672_47ef_bb32_c79f31790f91.slice/crio-9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f WatchSource:0}: Error finding container 9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f: Status 404 returned error can't find the container with id 9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.360202 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.381251 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.393468 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"]
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.400533 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.416396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" event={"ID":"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52","Type":"ContainerStarted","Data":"9e7fafa3c4359250136e12b475ed965f7c6148a195bcab11c99ebf50b0fc1bcb"}
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.417636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerStarted","Data":"9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f"}
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.418708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerStarted","Data":"4d78f1e45a0c6a4cb8ba55254cd92ac8d35c6e02d5bd767c1be192646a5e40fd"}
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.419558 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.420057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" event={"ID":"8add43c0-9280-4e92-b4fe-4628eb645e56","Type":"ContainerStarted","Data":"690e697871c2ef33d8bb9bf3c685a48886157fe3db6ea51bd61104436932f421"}
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.420087 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" event={"ID":"8add43c0-9280-4e92-b4fe-4628eb645e56","Type":"ContainerStarted","Data":"551c77a07bc56850fd3be70039a389a75dd8f94222cc9946cc798296a3fb147a"}
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.420101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" event={"ID":"8add43c0-9280-4e92-b4fe-4628eb645e56","Type":"ContainerStarted","Data":"b1908d8bf2cfb08c9868b23cd01d73eb9ff5b4ae3d82621bd62397583a9215c6"}
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.431306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6"]
Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.436042 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6775f10_01f3_4263_8441_ec5be6baf5c3.slice/crio-e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab WatchSource:0}: Error finding container e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab: Status 404 returned error can't find the container with id e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.439647 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.461041 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.499501 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbf4j\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-kube-api-access-nbf4j\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.512546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.519218 4820 request.go:700] Waited for 1.919970283s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.535901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86913b03-f631-4bfa-8533-c43326d364ff-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrv9w\" (UID: \"86913b03-f631-4bfa-8533-c43326d364ff\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.553715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwd4p\" (UniqueName: \"kubernetes.io/projected/6e921dcf-57ab-41e2-9994-fb602aeec37f-kube-api-access-fwd4p\") pod \"cluster-image-registry-operator-dc59b4c8b-wwvxr\" (UID: \"6e921dcf-57ab-41e2-9994-fb602aeec37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.575937 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-987r7\" (UniqueName: \"kubernetes.io/projected/77141b4f-e31f-4e63-a5cb-329ea918a5ed-kube-api-access-987r7\") pod \"etcd-operator-b45778765-cgbv7\" (UID: \"77141b4f-e31f-4e63-a5cb-329ea918a5ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kl5\" (UniqueName: \"kubernetes.io/projected/aee60016-61c2-4f4d-b181-59c1def12eef-kube-api-access-l2kl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582d21-813c-49a4-aa49-e4a7d3f0f638-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655190 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-config\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-client\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655341 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcj4k\" (UniqueName: \"kubernetes.io/projected/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-kube-api-access-vcj4k\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655479 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-serving-cert\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22582d21-813c-49a4-aa49-e4a7d3f0f638-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5vg\" (UniqueName: \"kubernetes.io/projected/22582d21-813c-49a4-aa49-e4a7d3f0f638-kube-api-access-4p5vg\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655658 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName:
\"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-encryption-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee60016-61c2-4f4d-b181-59c1def12eef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvm7b\" (UniqueName: \"kubernetes.io/projected/35f83dc0-1687-4716-b61f-e7bbb921d1c2-kube-api-access-lvm7b\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655941 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.655987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f83dc0-1687-4716-b61f-e7bbb921d1c2-serving-cert\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-audit-dir\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqzfz\" (UniqueName: \"kubernetes.io/projected/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-kube-api-access-wqzfz\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.656095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656116 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sw9\" (UniqueName: \"kubernetes.io/projected/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-kube-api-access-d2sw9\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656159 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656323 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-audit\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656392 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656412 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-config\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: 
\"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-node-pullsecrets\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-serving-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krddv\" (UniqueName: \"kubernetes.io/projected/228a9802-8837-425d-ab0f-72c79dbc4399-kube-api-access-krddv\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656638 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-image-import-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-serving-cert\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-trusted-ca\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656856 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.656917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee60016-61c2-4f4d-b181-59c1def12eef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.662121 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.162109391 +0000 UTC m=+143.195193589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.664653 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.692469 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.698941 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.763848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-image-import-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764057 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764106 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc95478e-4574-4010-8833-5da4ec1987b3-signing-key\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764135 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzz4\" (UniqueName: \"kubernetes.io/projected/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-kube-api-access-ttzz4\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.764183 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.26415039 +0000 UTC m=+143.297234588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764224 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-default-certificate\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-serving-cert\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34dd983a-2ee5-48ad-8858-59e9c0cbf483-tmpfs\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-stats-auth\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/754cb6b5-90c5-4747-8ef0-28a7c6b02448-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177c9eb7-021d-4d7f-a044-8913469b4236-config-volume\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764776 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951903b-474b-4279-b6ad-ab8920fd2d5b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764817 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e86cdb-22d7-424c-a51e-61c1d7848655-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.764882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee60016-61c2-4f4d-b181-59c1def12eef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.764998 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.264987176 +0000 UTC m=+143.298071434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765060 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kl5\" (UniqueName: \"kubernetes.io/projected/aee60016-61c2-4f4d-b181-59c1def12eef-kube-api-access-l2kl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc95478e-4574-4010-8833-5da4ec1987b3-signing-cabundle\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9185d26f-44b3-45e3-9417-11148a03a52d-config\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-proxy-tls\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582d21-813c-49a4-aa49-e4a7d3f0f638-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765215 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-srv-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: 
I0221 06:49:27.765731 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-image-import-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee60016-61c2-4f4d-b181-59c1def12eef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765807 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hcxd\" (UniqueName: \"kubernetes.io/projected/8b5270e1-81d3-477a-96f9-b2cbc3090288-kube-api-access-4hcxd\") pod \"downloads-7954f5f757-kxrb8\" (UID: \"8b5270e1-81d3-477a-96f9-b2cbc3090288\") " pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.765880 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-proxy-tls\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: 
\"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d20e62-3892-4e70-adad-754ac75dd1b9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8462a28b-a255-4ec7-9e85-cb98c6666e68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766140 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766163 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22582d21-813c-49a4-aa49-e4a7d3f0f638-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-metrics-certs\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-images\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: 
\"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e86cdb-22d7-424c-a51e-61c1d7848655-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766578 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5ns\" (UniqueName: \"kubernetes.io/projected/177c9eb7-021d-4d7f-a044-8913469b4236-kube-api-access-7k5ns\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-encryption-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee60016-61c2-4f4d-b181-59c1def12eef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.766698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9185d26f-44b3-45e3-9417-11148a03a52d-serving-cert\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: 
\"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22582d21-813c-49a4-aa49-e4a7d3f0f638-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.767663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-node-bootstrap-token\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0951903b-474b-4279-b6ad-ab8920fd2d5b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzbj\" (UniqueName: \"kubernetes.io/projected/fc95478e-4574-4010-8833-5da4ec1987b3-kube-api-access-lnzbj\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f83dc0-1687-4716-b61f-e7bbb921d1c2-serving-cert\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-metrics-tls\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.768694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-audit-dir\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768767 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-audit-dir\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.768935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-serving-cert\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.769986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"console-f9d7485db-cgbzf\" 
(UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.771180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.771413 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.770960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772537 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128520ce-9a27-454a-8394-efae24e83a7c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128520ce-9a27-454a-8394-efae24e83a7c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951903b-474b-4279-b6ad-ab8920fd2d5b-config\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klv6p\" (UniqueName: \"kubernetes.io/projected/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-kube-api-access-klv6p\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " 
pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-csi-data-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772675 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fl94\" (UniqueName: \"kubernetes.io/projected/128520ce-9a27-454a-8394-efae24e83a7c-kube-api-access-9fl94\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhs7f\" (UniqueName: \"kubernetes.io/projected/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-kube-api-access-xhs7f\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772772 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1df285f9-7ae4-4fea-8817-0a7e5e851551-cert\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772798 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtj9\" (UniqueName: \"kubernetes.io/projected/754cb6b5-90c5-4747-8ef0-28a7c6b02448-kube-api-access-6rtj9\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-node-pullsecrets\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee60016-61c2-4f4d-b181-59c1def12eef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krddv\" (UniqueName: \"kubernetes.io/projected/228a9802-8837-425d-ab0f-72c79dbc4399-kube-api-access-krddv\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772934 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7snp\" (UniqueName: \"kubernetes.io/projected/64747ec7-e06d-406d-8c6e-332b1cbe179f-kube-api-access-w7snp\") pod \"migrator-59844c95c7-ck2xk\" (UID: \"64747ec7-e06d-406d-8c6e-332b1cbe179f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.772977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d20e62-3892-4e70-adad-754ac75dd1b9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9plh\" (UniqueName: \"kubernetes.io/projected/021bee51-757d-4fcb-97b6-af9ad74d569c-kube-api-access-d9plh\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-trusted-ca\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-plugins-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773138 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b64a6e2-e14a-4de0-8630-e617a55b0794-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773184 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-profile-collector-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773280 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7e86cdb-22d7-424c-a51e-61c1d7848655-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773320 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-mountpoint-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773344 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-webhook-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582d21-813c-49a4-aa49-e4a7d3f0f638-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-socket-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw68v\" (UniqueName: \"kubernetes.io/projected/8462a28b-a255-4ec7-9e85-cb98c6666e68-kube-api-access-zw68v\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.773986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774014 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-config\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774157 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-client\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774184 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23e5da7c-be56-4259-ab49-bf8ad50831fe-service-ca-bundle\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774215 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcj4k\" (UniqueName: \"kubernetes.io/projected/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-kube-api-access-vcj4k\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-serving-cert\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkgw8\" (UniqueName: \"kubernetes.io/projected/3b64a6e2-e14a-4de0-8630-e617a55b0794-kube-api-access-kkgw8\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774464 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774520 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774546 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5rl\" (UniqueName: \"kubernetes.io/projected/23e5da7c-be56-4259-ab49-bf8ad50831fe-kube-api-access-2z5rl\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/228a9802-8837-425d-ab0f-72c79dbc4399-node-pullsecrets\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.775514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.775689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f83dc0-1687-4716-b61f-e7bbb921d1c2-serving-cert\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.776707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.776967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.774586 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p5vg\" (UniqueName: \"kubernetes.io/projected/22582d21-813c-49a4-aa49-e4a7d3f0f638-kube-api-access-4p5vg\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx78k\" (UniqueName: \"kubernetes.io/projected/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-kube-api-access-gx78k\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777458 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-registration-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d20e62-3892-4e70-adad-754ac75dd1b9-config\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbm8\" (UniqueName: \"kubernetes.io/projected/34dd983a-2ee5-48ad-8858-59e9c0cbf483-kube-api-access-szbm8\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.777655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvm7b\" (UniqueName: \"kubernetes.io/projected/35f83dc0-1687-4716-b61f-e7bbb921d1c2-kube-api-access-lvm7b\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh7hg\" (UniqueName: \"kubernetes.io/projected/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-kube-api-access-fh7hg\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqzfz\" (UniqueName: \"kubernetes.io/projected/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-kube-api-access-wqzfz\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sw9\" (UniqueName: \"kubernetes.io/projected/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-kube-api-access-d2sw9\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778600 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fft9q\" (UniqueName: \"kubernetes.io/projected/9185d26f-44b3-45e3-9417-11148a03a52d-kube-api-access-fft9q\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-certs\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778866 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf98x\" (UniqueName: \"kubernetes.io/projected/1df285f9-7ae4-4fea-8817-0a7e5e851551-kube-api-access-sf98x\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.778986 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-trusted-ca\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779053 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-audit\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-config\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-srv-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177c9eb7-021d-4d7f-a044-8913469b4236-metrics-tls\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfgv\" (UniqueName: \"kubernetes.io/projected/b7322fd9-681a-4d9a-83ac-9e74308f8747-kube-api-access-4bfgv\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-serving-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779552 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-config\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-audit\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f83dc0-1687-4716-b61f-e7bbb921d1c2-config\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.779915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.780004 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.780094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-serving-ca\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.780758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.784758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-encryption-config\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.784941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-etcd-client\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785195 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785724 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.785830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.788814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.789147 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.793778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228a9802-8837-425d-ab0f-72c79dbc4399-serving-cert\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.800789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kl5\" (UniqueName: \"kubernetes.io/projected/aee60016-61c2-4f4d-b181-59c1def12eef-kube-api-access-l2kl5\") pod \"openshift-controller-manager-operator-756b6f6bc6-jmjng\" (UID: \"aee60016-61c2-4f4d-b181-59c1def12eef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.817919 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"console-f9d7485db-cgbzf\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.854947 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"oauth-openshift-558db77b4-f6j4c\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.876095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krddv\" (UniqueName: \"kubernetes.io/projected/228a9802-8837-425d-ab0f-72c79dbc4399-kube-api-access-krddv\") pod \"apiserver-76f77b778f-nnhcf\" (UID: \"228a9802-8837-425d-ab0f-72c79dbc4399\") " pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880583 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-metrics-tls\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880764 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6"
Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/128520ce-9a27-454a-8394-efae24e83a7c-serving-cert\") pod
\"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880803 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128520ce-9a27-454a-8394-efae24e83a7c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951903b-474b-4279-b6ad-ab8920fd2d5b-config\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klv6p\" (UniqueName: \"kubernetes.io/projected/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-kube-api-access-klv6p\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-csi-data-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880881 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fl94\" (UniqueName: \"kubernetes.io/projected/128520ce-9a27-454a-8394-efae24e83a7c-kube-api-access-9fl94\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880899 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhs7f\" (UniqueName: \"kubernetes.io/projected/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-kube-api-access-xhs7f\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1df285f9-7ae4-4fea-8817-0a7e5e851551-cert\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880935 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtj9\" (UniqueName: \"kubernetes.io/projected/754cb6b5-90c5-4747-8ef0-28a7c6b02448-kube-api-access-6rtj9\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7snp\" (UniqueName: \"kubernetes.io/projected/64747ec7-e06d-406d-8c6e-332b1cbe179f-kube-api-access-w7snp\") pod \"migrator-59844c95c7-ck2xk\" (UID: 
\"64747ec7-e06d-406d-8c6e-332b1cbe179f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.880991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d20e62-3892-4e70-adad-754ac75dd1b9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9plh\" (UniqueName: \"kubernetes.io/projected/021bee51-757d-4fcb-97b6-af9ad74d569c-kube-api-access-d9plh\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-plugins-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881053 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b64a6e2-e14a-4de0-8630-e617a55b0794-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-profile-collector-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7e86cdb-22d7-424c-a51e-61c1d7848655-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-socket-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " 
pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-mountpoint-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-webhook-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw68v\" (UniqueName: \"kubernetes.io/projected/8462a28b-a255-4ec7-9e85-cb98c6666e68-kube-api-access-zw68v\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881202 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23e5da7c-be56-4259-ab49-bf8ad50831fe-service-ca-bundle\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkgw8\" (UniqueName: \"kubernetes.io/projected/3b64a6e2-e14a-4de0-8630-e617a55b0794-kube-api-access-kkgw8\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881279 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881300 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z5rl\" (UniqueName: \"kubernetes.io/projected/23e5da7c-be56-4259-ab49-bf8ad50831fe-kube-api-access-2z5rl\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx78k\" (UniqueName: \"kubernetes.io/projected/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-kube-api-access-gx78k\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881332 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-registration-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d20e62-3892-4e70-adad-754ac75dd1b9-config\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szbm8\" (UniqueName: \"kubernetes.io/projected/34dd983a-2ee5-48ad-8858-59e9c0cbf483-kube-api-access-szbm8\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881417 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881432 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh7hg\" (UniqueName: \"kubernetes.io/projected/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-kube-api-access-fh7hg\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fft9q\" (UniqueName: \"kubernetes.io/projected/9185d26f-44b3-45e3-9417-11148a03a52d-kube-api-access-fft9q\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-certs\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf98x\" (UniqueName: 
\"kubernetes.io/projected/1df285f9-7ae4-4fea-8817-0a7e5e851551-kube-api-access-sf98x\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-srv-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177c9eb7-021d-4d7f-a044-8913469b4236-metrics-tls\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfgv\" (UniqueName: \"kubernetes.io/projected/b7322fd9-681a-4d9a-83ac-9e74308f8747-kube-api-access-4bfgv\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc95478e-4574-4010-8833-5da4ec1987b3-signing-key\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzz4\" (UniqueName: 
\"kubernetes.io/projected/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-kube-api-access-ttzz4\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-default-certificate\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34dd983a-2ee5-48ad-8858-59e9c0cbf483-tmpfs\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-stats-auth\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881691 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/754cb6b5-90c5-4747-8ef0-28a7c6b02448-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177c9eb7-021d-4d7f-a044-8913469b4236-config-volume\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951903b-474b-4279-b6ad-ab8920fd2d5b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e86cdb-22d7-424c-a51e-61c1d7848655-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc95478e-4574-4010-8833-5da4ec1987b3-signing-cabundle\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 
21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9185d26f-44b3-45e3-9417-11148a03a52d-config\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881785 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-proxy-tls\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881801 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-srv-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hcxd\" (UniqueName: \"kubernetes.io/projected/8b5270e1-81d3-477a-96f9-b2cbc3090288-kube-api-access-4hcxd\") pod \"downloads-7954f5f757-kxrb8\" (UID: \"8b5270e1-81d3-477a-96f9-b2cbc3090288\") " pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-proxy-tls\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d20e62-3892-4e70-adad-754ac75dd1b9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8462a28b-a255-4ec7-9e85-cb98c6666e68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-metrics-certs\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881895 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-images\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e86cdb-22d7-424c-a51e-61c1d7848655-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5ns\" (UniqueName: \"kubernetes.io/projected/177c9eb7-021d-4d7f-a044-8913469b4236-kube-api-access-7k5ns\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9185d26f-44b3-45e3-9417-11148a03a52d-serving-cert\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.881991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-node-bootstrap-token\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.882007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnzbj\" (UniqueName: \"kubernetes.io/projected/fc95478e-4574-4010-8833-5da4ec1987b3-kube-api-access-lnzbj\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.882022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0951903b-474b-4279-b6ad-ab8920fd2d5b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.882272 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.382258317 +0000 UTC m=+143.415342515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.883047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.885890 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-metrics-tls\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.886370 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.886412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.888199 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889799 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-mountpoint-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-socket-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.889944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-registration-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.891345 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-plugins-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.891380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23e5da7c-be56-4259-ab49-bf8ad50831fe-service-ca-bundle\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.892730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34dd983a-2ee5-48ad-8858-59e9c0cbf483-tmpfs\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.892883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.892976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d20e62-3892-4e70-adad-754ac75dd1b9-config\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.893460 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128520ce-9a27-454a-8394-efae24e83a7c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.894084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b64a6e2-e14a-4de0-8630-e617a55b0794-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.895100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177c9eb7-021d-4d7f-a044-8913469b4236-config-volume\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.896437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-metrics-certs\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.896435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-images\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: 
\"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.897300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-profile-collector-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcj4k\" (UniqueName: \"kubernetes.io/projected/7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3-kube-api-access-vcj4k\") pod \"cluster-samples-operator-665b6dd947-jlhnm\" (UID: \"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1df285f9-7ae4-4fea-8817-0a7e5e851551-cert\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898456 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951903b-474b-4279-b6ad-ab8920fd2d5b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.898910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/b7322fd9-681a-4d9a-83ac-9e74308f8747-csi-data-dir\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.899179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e86cdb-22d7-424c-a51e-61c1d7848655-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.899437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34dd983a-2ee5-48ad-8858-59e9c0cbf483-webhook-cert\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.900562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc95478e-4574-4010-8833-5da4ec1987b3-signing-cabundle\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.901127 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9185d26f-44b3-45e3-9417-11148a03a52d-config\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.901360 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/128520ce-9a27-454a-8394-efae24e83a7c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.904611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-certs\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.906438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.907519 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/754cb6b5-90c5-4747-8ef0-28a7c6b02448-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.908118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7e86cdb-22d7-424c-a51e-61c1d7848655-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" Feb 21 06:49:27 crc 
kubenswrapper[4820]: I0221 06:49:27.908499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-srv-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9185d26f-44b3-45e3-9417-11148a03a52d-serving-cert\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909495 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/021bee51-757d-4fcb-97b6-af9ad74d569c-srv-cert\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.909530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d20e62-3892-4e70-adad-754ac75dd1b9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.913547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-node-bootstrap-token\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.915142 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cgbv7"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.915872 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951903b-474b-4279-b6ad-ab8920fd2d5b-config\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.916712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc95478e-4574-4010-8833-5da4ec1987b3-signing-key\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.917309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8462a28b-a255-4ec7-9e85-cb98c6666e68-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 
06:49:27.917393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-proxy-tls\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.919152 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177c9eb7-021d-4d7f-a044-8913469b4236-metrics-tls\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.921064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-proxy-tls\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.921728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-stats-auth\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.922223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23e5da7c-be56-4259-ab49-bf8ad50831fe-default-certificate\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.924022 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.927152 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5vg\" (UniqueName: \"kubernetes.io/projected/22582d21-813c-49a4-aa49-e4a7d3f0f638-kube-api-access-4p5vg\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6qt\" (UID: \"22582d21-813c-49a4-aa49-e4a7d3f0f638\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.931491 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.933146 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77141b4f_e31f_4e63_a5cb_329ea918a5ed.slice/crio-da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320 WatchSource:0}: Error finding container da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320: Status 404 returned error can't find the container with id da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320 Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.938457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.938648 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.945545 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.957744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvm7b\" (UniqueName: \"kubernetes.io/projected/35f83dc0-1687-4716-b61f-e7bbb921d1c2-kube-api-access-lvm7b\") pod \"console-operator-58897d9998-4kcq6\" (UID: \"35f83dc0-1687-4716-b61f-e7bbb921d1c2\") " pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.966479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w"] Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.971121 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.977533 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqzfz\" (UniqueName: \"kubernetes.io/projected/4cefa9c1-919e-4edc-95c9-d26c4f8f254f-kube-api-access-wqzfz\") pod \"authentication-operator-69f744f599-4dt74\" (UID: \"4cefa9c1-919e-4edc-95c9-d26c4f8f254f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.983162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.983302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" Feb 21 06:49:27 crc kubenswrapper[4820]: E0221 06:49:27.983497 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.483483911 +0000 UTC m=+143.516568109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:27 crc kubenswrapper[4820]: W0221 06:49:27.986866 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86913b03_f631_4bfa_8533_c43326d364ff.slice/crio-986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5 WatchSource:0}: Error finding container 986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5: Status 404 returned error can't find the container with id 986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5 Feb 21 06:49:27 crc kubenswrapper[4820]: I0221 06:49:27.997482 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.025907 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sw9\" (UniqueName: \"kubernetes.io/projected/c8bb35a2-6708-4267-bb44-d80ff0e0ccc8-kube-api-access-d2sw9\") pod \"openshift-config-operator-7777fb866f-vh8c8\" (UID: \"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.056627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0951903b-474b-4279-b6ad-ab8920fd2d5b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-shsgs\" (UID: \"0951903b-474b-4279-b6ad-ab8920fd2d5b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.076649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh7hg\" (UniqueName: \"kubernetes.io/projected/fc6a5b86-a925-4f00-b0ed-19717e7e1f09-kube-api-access-fh7hg\") pod \"dns-operator-744455d44c-lt58x\" (UID: \"fc6a5b86-a925-4f00-b0ed-19717e7e1f09\") " pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.083919 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.087747 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.587719286 +0000 UTC m=+143.620803484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.103610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfgv\" (UniqueName: \"kubernetes.io/projected/b7322fd9-681a-4d9a-83ac-9e74308f8747-kube-api-access-4bfgv\") pod \"csi-hostpathplugin-qhnw8\" (UID: \"b7322fd9-681a-4d9a-83ac-9e74308f8747\") " pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.118333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fft9q\" (UniqueName: \"kubernetes.io/projected/9185d26f-44b3-45e3-9417-11148a03a52d-kube-api-access-fft9q\") pod \"service-ca-operator-777779d784-tgf94\" (UID: \"9185d26f-44b3-45e3-9417-11148a03a52d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.126683 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.137162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d20e62-3892-4e70-adad-754ac75dd1b9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2fm9m\" (UID: \"a3d20e62-3892-4e70-adad-754ac75dd1b9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.160531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtj9\" (UniqueName: \"kubernetes.io/projected/754cb6b5-90c5-4747-8ef0-28a7c6b02448-kube-api-access-6rtj9\") pod \"multus-admission-controller-857f4d67dd-bm22t\" (UID: \"754cb6b5-90c5-4747-8ef0-28a7c6b02448\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.172894 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.173317 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.182002 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.183399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7snp\" (UniqueName: \"kubernetes.io/projected/64747ec7-e06d-406d-8c6e-332b1cbe179f-kube-api-access-w7snp\") pod \"migrator-59844c95c7-ck2xk\" (UID: \"64747ec7-e06d-406d-8c6e-332b1cbe179f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.188025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.188711 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.688691982 +0000 UTC m=+143.721776180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.190542 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.190736 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.203509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.217957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"marketplace-operator-79b997595-k58x6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " pod="openshift-marketplace/marketplace-operator-79b997595-k58x6"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.232509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.236298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"collect-profiles-29527605-6xqg9\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.245608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7e86cdb-22d7-424c-a51e-61c1d7848655-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6jb4w\" (UID: \"c7e86cdb-22d7-424c-a51e-61c1d7848655\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.255171 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.256970 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.259135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw68v\" (UniqueName: \"kubernetes.io/projected/8462a28b-a255-4ec7-9e85-cb98c6666e68-kube-api-access-zw68v\") pod \"package-server-manager-789f6589d5-9w9rw\" (UID: \"8462a28b-a255-4ec7-9e85-cb98c6666e68\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.276636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9plh\" (UniqueName: \"kubernetes.io/projected/021bee51-757d-4fcb-97b6-af9ad74d569c-kube-api-access-d9plh\") pod \"catalog-operator-68c6474976-h6cdl\" (UID: \"021bee51-757d-4fcb-97b6-af9ad74d569c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.292669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.294211 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.794194956 +0000 UTC m=+143.827279154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.294284 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.296030 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4kcq6"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.299955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhs7f\" (UniqueName: \"kubernetes.io/projected/b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60-kube-api-access-xhs7f\") pod \"olm-operator-6b444d44fb-4dnsn\" (UID: \"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.318588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkgw8\" (UniqueName: \"kubernetes.io/projected/3b64a6e2-e14a-4de0-8630-e617a55b0794-kube-api-access-kkgw8\") pod \"control-plane-machine-set-operator-78cbb6b69f-zl5zd\" (UID: \"3b64a6e2-e14a-4de0-8630-e617a55b0794\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.333916 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"
Feb 21 06:49:28 crc kubenswrapper[4820]: W0221 06:49:28.335436 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaee60016_61c2_4f4d_b181_59c1def12eef.slice/crio-08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6 WatchSource:0}: Error finding container 08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6: Status 404 returned error can't find the container with id 08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.338428 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z5rl\" (UniqueName: \"kubernetes.io/projected/23e5da7c-be56-4259-ab49-bf8ad50831fe-kube-api-access-2z5rl\") pod \"router-default-5444994796-q9pg5\" (UID: \"23e5da7c-be56-4259-ab49-bf8ad50831fe\") " pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.342660 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.349749 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.358370 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.359498 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx78k\" (UniqueName: \"kubernetes.io/projected/eca19fe2-b995-48cf-974d-e3fc59f8b9b3-kube-api-access-gx78k\") pod \"machine-config-controller-84d6567774-45qxr\" (UID: \"eca19fe2-b995-48cf-974d-e3fc59f8b9b3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.365966 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.376358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.395791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szbm8\" (UniqueName: \"kubernetes.io/projected/34dd983a-2ee5-48ad-8858-59e9c0cbf483-kube-api-access-szbm8\") pod \"packageserver-d55dfcdfc-v8n56\" (UID: \"34dd983a-2ee5-48ad-8858-59e9c0cbf483\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.397025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.398269 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:28.898250835 +0000 UTC m=+143.931335033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.414807 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzz4\" (UniqueName: \"kubernetes.io/projected/0e6916fe-cc6e-4d6e-89d0-a0d499ff028a-kube-api-access-ttzz4\") pod \"machine-config-operator-74547568cd-2jrlk\" (UID: \"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.416409 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.420529 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5ns\" (UniqueName: \"kubernetes.io/projected/177c9eb7-021d-4d7f-a044-8913469b4236-kube-api-access-7k5ns\") pod \"dns-default-sps4j\" (UID: \"177c9eb7-021d-4d7f-a044-8913469b4236\") " pod="openshift-dns/dns-default-sps4j"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.432279 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.440007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnzbj\" (UniqueName: \"kubernetes.io/projected/fc95478e-4574-4010-8833-5da4ec1987b3-kube-api-access-lnzbj\") pod \"service-ca-9c57cc56f-fm6pk\" (UID: \"fc95478e-4574-4010-8833-5da4ec1987b3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.440505 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.447127 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.454831 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.461703 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.462106 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerStarted","Data":"163e0224df79387e94d53de67771865cc2f448fe55307754f0c2f2e2575f77bd"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.463430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klv6p\" (UniqueName: \"kubernetes.io/projected/e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86-kube-api-access-klv6p\") pod \"machine-config-server-z7jtv\" (UID: \"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86\") " pod="openshift-machine-config-operator/machine-config-server-z7jtv"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.467323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerStarted","Data":"0767d187d2981c7d5f1c668b318887301f7e5326b2d0aaa6f0c17cc8530104d7"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.469270 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.474273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" event={"ID":"86913b03-f631-4bfa-8533-c43326d364ff","Type":"ContainerStarted","Data":"d389d881740115e89bc79d39f4df733cb9bf875d1fdcda64285a0a8ad8bf1d6d"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.474316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" event={"ID":"86913b03-f631-4bfa-8533-c43326d364ff","Type":"ContainerStarted","Data":"986002aa23ccab6be3a4e3d8469782050e335a7456d1a10908b3268d1c08f1d5"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.475702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.482989 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.483587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fl94\" (UniqueName: \"kubernetes.io/projected/128520ce-9a27-454a-8394-efae24e83a7c-kube-api-access-9fl94\") pod \"kube-storage-version-migrator-operator-b67b599dd-9qjcj\" (UID: \"128520ce-9a27-454a-8394-efae24e83a7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.491909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerStarted","Data":"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.492881 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.506696 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sps4j"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.507614 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nnhcf"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.507954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.508252 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.008223015 +0000 UTC m=+144.041307213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.508913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" event={"ID":"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3","Type":"ContainerStarted","Data":"668acdd49cb90779a5a4c90c70308189cbe53ef18f3d1f8f218e2da60e56e210"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.512805 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6775f10-01f3-4263-8441-ec5be6baf5c3" containerID="f95713f2b6136ef696eafaeccf90f622806005b5330fb832ba828e64c86fa12b" exitCode=0
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.512889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" event={"ID":"b6775f10-01f3-4263-8441-ec5be6baf5c3","Type":"ContainerDied","Data":"f95713f2b6136ef696eafaeccf90f622806005b5330fb832ba828e64c86fa12b"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.512928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" event={"ID":"b6775f10-01f3-4263-8441-ec5be6baf5c3","Type":"ContainerStarted","Data":"e4811bc3cd2f451b0ef29d261813103460d388e5a0a61cb580f2cb7e92dcfcab"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.513475 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf98x\" (UniqueName: \"kubernetes.io/projected/1df285f9-7ae4-4fea-8817-0a7e5e851551-kube-api-access-sf98x\") pod \"ingress-canary-m7pv7\" (UID: \"1df285f9-7ae4-4fea-8817-0a7e5e851551\") " pod="openshift-ingress-canary/ingress-canary-m7pv7"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.529394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" event={"ID":"6e921dcf-57ab-41e2-9994-fb602aeec37f","Type":"ContainerStarted","Data":"3ab60d9b3f52c70470320562b18fdde34067515c8b35d8dc6c25b6e59a035ed5"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.529452 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" event={"ID":"6e921dcf-57ab-41e2-9994-fb602aeec37f","Type":"ContainerStarted","Data":"03ba3f049be9777508c6bf133e2f4c8552c6d09792d0971a629871cc98dc8ffd"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.532393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hcxd\" (UniqueName: \"kubernetes.io/projected/8b5270e1-81d3-477a-96f9-b2cbc3090288-kube-api-access-4hcxd\") pod \"downloads-7954f5f757-kxrb8\" (UID: \"8b5270e1-81d3-477a-96f9-b2cbc3090288\") " pod="openshift-console/downloads-7954f5f757-kxrb8"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.539196 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z7jtv"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.546714 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerStarted","Data":"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.547877 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.548914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.556197 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dhsbz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.556293 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.576770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" event={"ID":"35f83dc0-1687-4716-b61f-e7bbb921d1c2","Type":"ContainerStarted","Data":"296c1c427e3b90697c7d0dcd2e934a82975b980d70a3cab9c3a7d3ad43fcbfef"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.613123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.614703 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.114692638 +0000 UTC m=+144.147776836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.626343 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.629713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" event={"ID":"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52","Type":"ContainerStarted","Data":"009711a7878119d1466526558a1345d8fbde1d13b4a5b3fc08ae790a869b47df"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.630182 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" event={"ID":"b92ec3e3-a4a6-4b99-9a3c-d1b97369ab52","Type":"ContainerStarted","Data":"0d7642ab5cfabcea5ad30e4983844870b8a7571f1f485792a8ad4f08f2d8a036"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.669363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.673812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" event={"ID":"77141b4f-e31f-4e63-a5cb-329ea918a5ed","Type":"ContainerStarted","Data":"6ea00a240ec6b4bd80c7ed6defb6fc48d83a0c5bf192140ba4f3f57d1c95b56e"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.673857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" event={"ID":"77141b4f-e31f-4e63-a5cb-329ea918a5ed","Type":"ContainerStarted","Data":"da9cd2c7bbfcd5cf1adefd8808be6f260ae90265718025da95a152328530f320"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.691362 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kxrb8"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.691787 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.719758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.719843 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.219829181 +0000 UTC m=+144.252913379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.720106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.722838 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.222830672 +0000 UTC m=+144.255914870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.763001 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" event={"ID":"aee60016-61c2-4f4d-b181-59c1def12eef","Type":"ContainerStarted","Data":"08a35d772dee5ff6aa6b3e3a0e88a016649448957c3f42ce9386938e5a704fc6"}
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.777306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qhnw8"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.787503 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4dt74"]
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.812884 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m7pv7"
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.821206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.822248 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.322197399 +0000 UTC m=+144.355281597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.829395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.830050 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.330037857 +0000 UTC m=+144.363122055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.930695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.930893 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.430857898 +0000 UTC m=+144.463942106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:28 crc kubenswrapper[4820]: I0221 06:49:28.931816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:28 crc kubenswrapper[4820]: E0221 06:49:28.932410 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:29.432396185 +0000 UTC m=+144.465480383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.476461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.477787 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.477730907 +0000 UTC m=+145.510815135 (durationBeforeRetry 1s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: W0221 06:49:29.513468 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7322fd9_681a_4d9a_83ac_9e74308f8747.slice/crio-250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024 WatchSource:0}: Error finding container 250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024: Status 404 returned error can't find the container with id 250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024 Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.524869 4820 csr.go:261] certificate signing request csr-v586p is approved, waiting to be issued Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.535729 4820 csr.go:257] certificate signing request csr-v586p is issued Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.552579 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.573636 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tgf94"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.578131 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.579102 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.079080824 +0000 UTC m=+145.112165022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.592628 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.630256 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.636468 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr"] Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.684064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:29 
crc kubenswrapper[4820]: E0221 06:49:29.684389 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.184379031 +0000 UTC m=+145.217463229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.736288 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-97n76" podStartSLOduration=124.736266322 podStartE2EDuration="2m4.736266322s" podCreationTimestamp="2026-02-21 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.685984051 +0000 UTC m=+144.719068249" watchObservedRunningTime="2026-02-21 06:49:29.736266322 +0000 UTC m=+144.769350520" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.785055 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" podStartSLOduration=122.785037838 podStartE2EDuration="2m2.785037838s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.784481291 +0000 UTC m=+144.817565509" watchObservedRunningTime="2026-02-21 06:49:29.785037838 +0000 UTC 
m=+144.818122036" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.785875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.786193 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.286178032 +0000 UTC m=+145.319262230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.863573 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f6j4c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.863624 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" 
Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.877292 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zz4sx" podStartSLOduration=122.877278037 podStartE2EDuration="2m2.877278037s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.875889865 +0000 UTC m=+144.908974063" watchObservedRunningTime="2026-02-21 06:49:29.877278037 +0000 UTC m=+144.910362235" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.886680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:29 crc kubenswrapper[4820]: E0221 06:49:29.886959 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.386948372 +0000 UTC m=+145.420032570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.887947 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"119e4cae90399f87711a5e28a083ae9bfdf8f68b0235530588df32471875cf14"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888028 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z7jtv" event={"ID":"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86","Type":"ContainerStarted","Data":"c086d7926efb60ba245172a9705ea0699ff50fe3572e5d1260f212299ed45b3d"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerStarted","Data":"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerStarted","Data":"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25"} Feb 21 06:49:29 crc 
kubenswrapper[4820]: I0221 06:49:29.888064 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" event={"ID":"0951903b-474b-4279-b6ad-ab8920fd2d5b","Type":"ContainerStarted","Data":"7fbbba8bc2f9c68f660d632bd811dc3f8c5587f1030f382a4f8d537dd54563e1"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.888074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q9pg5" event={"ID":"23e5da7c-be56-4259-ab49-bf8ad50831fe","Type":"ContainerStarted","Data":"a7c0a8555f6c5bdcdd16168a7270153195c381e910a38dfadc8c6d8accd39bcc"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.891381 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wwvxr" podStartSLOduration=123.891371007 podStartE2EDuration="2m3.891371007s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:29.890577983 +0000 UTC m=+144.923662181" watchObservedRunningTime="2026-02-21 06:49:29.891371007 +0000 UTC m=+144.924455195" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.891908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" event={"ID":"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3","Type":"ContainerStarted","Data":"6f75c114844d6958dd08d020548972dc253bb2aab663deb3b5b62ecce93bada1"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.895795 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"250c336f3378d1c658b088f9f25f0b5abbb212c6bfc3885252266de5d2d7a024"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 
06:49:29.912443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" event={"ID":"aee60016-61c2-4f4d-b181-59c1def12eef","Type":"ContainerStarted","Data":"0e9394f904f8e69bd02c242770778e31a81212264bb19a6c916a15a7f90dfd7c"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.918529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" event={"ID":"35f83dc0-1687-4716-b61f-e7bbb921d1c2","Type":"ContainerStarted","Data":"ca66ca39715695f08524676ef281309b734ae222f7bc78c1b27e486b09f92969"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.920701 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.926013 4820 patch_prober.go:28] interesting pod/console-operator-58897d9998-4kcq6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.926060 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" podUID="35f83dc0-1687-4716-b61f-e7bbb921d1c2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.931750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" event={"ID":"86913b03-f631-4bfa-8533-c43326d364ff","Type":"ContainerStarted","Data":"0862da7cd9add89f3ef0abf2fb7a4fcb4b661cab58657af342c14fa20ceeecf2"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 
06:49:29.938295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" event={"ID":"4cefa9c1-919e-4edc-95c9-d26c4f8f254f","Type":"ContainerStarted","Data":"ba44082e4cf50771d02085ed594cf66179a3893adfb4bcb572cc123a20fd2072"} Feb 21 06:49:29 crc kubenswrapper[4820]: I0221 06:49:29.953444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.004785 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.007862 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.507845124 +0000 UTC m=+145.540929322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.041027 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" podStartSLOduration=124.040998754 podStartE2EDuration="2m4.040998754s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.027617457 +0000 UTC m=+145.060701655" watchObservedRunningTime="2026-02-21 06:49:30.040998754 +0000 UTC m=+145.074082972" Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.103645 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-cgbv7" podStartSLOduration=123.103626463 podStartE2EDuration="2m3.103626463s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.068589185 +0000 UTC m=+145.101673393" watchObservedRunningTime="2026-02-21 06:49:30.103626463 +0000 UTC m=+145.136710671" Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.106961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: 
\"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.111195 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.611178652 +0000 UTC m=+145.644262840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.210507 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.211896 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.711866969 +0000 UTC m=+145.744951167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.214746 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.222509 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.722491933 +0000 UTC m=+145.755576131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.229219 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4kcq6" podStartSLOduration=124.229189718 podStartE2EDuration="2m4.229189718s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.223026129 +0000 UTC m=+145.256110327" watchObservedRunningTime="2026-02-21 06:49:30.229189718 +0000 UTC m=+145.262273916" Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.315656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.316047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:30.816033463 +0000 UTC m=+145.849117661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.353792 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" podStartSLOduration=124.353776352 podStartE2EDuration="2m4.353776352s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.35306438 +0000 UTC m=+145.386148578" watchObservedRunningTime="2026-02-21 06:49:30.353776352 +0000 UTC m=+145.386860550" Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.386660 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lt58x"] Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.417953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.418293 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:30.918281337 +0000 UTC m=+145.951365535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.519132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.520225 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.020209191 +0000 UTC m=+146.053293389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.537966 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-21 06:44:29 +0000 UTC, rotation deadline is 2026-11-04 07:38:35.25467705 +0000 UTC
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.538011 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6144h49m4.716669156s for next certificate rotation
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.570077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jmjng" podStartSLOduration=124.57006273 podStartE2EDuration="2m4.57006273s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.511334021 +0000 UTC m=+145.544418219" watchObservedRunningTime="2026-02-21 06:49:30.57006273 +0000 UTC m=+145.603146918"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.614581 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrv9w" podStartSLOduration=123.614562346 podStartE2EDuration="2m3.614562346s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.613564785 +0000 UTC m=+145.646648983" watchObservedRunningTime="2026-02-21 06:49:30.614562346 +0000 UTC m=+145.647646544"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.621000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.621421 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.121404525 +0000 UTC m=+146.154488723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.701063 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cgbzf" podStartSLOduration=124.701046761 podStartE2EDuration="2m4.701046761s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.699150182 +0000 UTC m=+145.732234390" watchObservedRunningTime="2026-02-21 06:49:30.701046761 +0000 UTC m=+145.734130959"
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.724878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.725135 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.225095893 +0000 UTC m=+146.258180091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.725247 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.725728 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.225719882 +0000 UTC m=+146.258804080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.827868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.828391 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.328372508 +0000 UTC m=+146.361456706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.931876 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:30 crc kubenswrapper[4820]: E0221 06:49:30.932380 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.432364107 +0000 UTC m=+146.465448305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.948124 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" event={"ID":"9185d26f-44b3-45e3-9417-11148a03a52d","Type":"ContainerStarted","Data":"e1f0606b9400e1be111b8fa11abcf23669076791cf0deba2791f7aab27698ab8"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.953384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" event={"ID":"fc6a5b86-a925-4f00-b0ed-19717e7e1f09","Type":"ContainerStarted","Data":"b0b84810729f08f6f4a0a76d41d52e508b32f4e7c40166318bec1eec3e01a8be"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.955420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" event={"ID":"4cefa9c1-919e-4edc-95c9-d26c4f8f254f","Type":"ContainerStarted","Data":"613a9c144c73d7dd6ec5f12bb0d0662ce55ba82b427a1fa572bda8e7a2dfdd4d"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.958847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q9pg5" event={"ID":"23e5da7c-be56-4259-ab49-bf8ad50831fe","Type":"ContainerStarted","Data":"95ce60616c680a0a70b12deed4684771b8f6db766e3d757a3a62b08aab238c19"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.972392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" event={"ID":"eca19fe2-b995-48cf-974d-e3fc59f8b9b3","Type":"ContainerStarted","Data":"2894c8e47b473d8cee0f0aefd641f1f2683c3376a2bf810ffbcfc723b4ff70f8"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.972432 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" event={"ID":"eca19fe2-b995-48cf-974d-e3fc59f8b9b3","Type":"ContainerStarted","Data":"b50b0f22da172cb023b14eaa761cac26db58be900e058aa95640679057e584d7"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.977675 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" event={"ID":"b6775f10-01f3-4263-8441-ec5be6baf5c3","Type":"ContainerStarted","Data":"93680276b6385e733428fe927807c8cf85ff66c9bf91666f33c34a34f9ca4ebd"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.978767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" event={"ID":"64747ec7-e06d-406d-8c6e-332b1cbe179f","Type":"ContainerStarted","Data":"fa51decaa09b0bb3accdc2e5dcd9fc2db2b58b72a82bc27d54517e9a1e590d87"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.979299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" event={"ID":"22582d21-813c-49a4-aa49-e4a7d3f0f638","Type":"ContainerStarted","Data":"f22cd2b646f76376735fec05b746210afcf303b742216a5ba7e6c9bbca4e2cb9"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.979987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerStarted","Data":"664261fec01aa563e66f61e44ce1cbae1fe7a29a0ccd589db01d6e50f133905e"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.980011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerStarted","Data":"810a68ff70084041c6eb28f80a1cb0b74ac91eb42a212ecb2271f8c2b08b95cf"}
Feb 21 06:49:30 crc kubenswrapper[4820]: I0221 06:49:30.981025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"d86c5e28e8430789d19126b10551b6ecbc425f4dfa078132742ed0399d850bcb"}
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.001370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" event={"ID":"7b8733c0-83ba-4d53-8dbb-ef2ae65ce6d3","Type":"ContainerStarted","Data":"1aa2781f8a7ff2588e5f11dec6fdbb1dd65b5fc040d316949a44b5804a171cf6"}
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.022608 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4dt74" podStartSLOduration=125.022593525 podStartE2EDuration="2m5.022593525s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:30.997588533 +0000 UTC m=+146.030672721" watchObservedRunningTime="2026-02-21 06:49:31.022593525 +0000 UTC m=+146.055677723"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.040174 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.040756 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.540733067 +0000 UTC m=+146.573817255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.041562 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" podStartSLOduration=124.041541372 podStartE2EDuration="2m4.041541372s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:31.035911051 +0000 UTC m=+146.068995249" watchObservedRunningTime="2026-02-21 06:49:31.041541372 +0000 UTC m=+146.074625570"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.064555 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-q9pg5" podStartSLOduration=124.064538622 podStartE2EDuration="2m4.064538622s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:31.063464589 +0000 UTC m=+146.096548787" watchObservedRunningTime="2026-02-21 06:49:31.064538622 +0000 UTC m=+146.097622820"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.141292 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.148854 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.64883601 +0000 UTC m=+146.681920278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.155781 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jlhnm" podStartSLOduration=125.155764181 podStartE2EDuration="2m5.155764181s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:31.148230382 +0000 UTC m=+146.181314580" watchObservedRunningTime="2026-02-21 06:49:31.155764181 +0000 UTC m=+146.188848379"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.204922 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.242359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.246982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.247413 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.747388743 +0000 UTC m=+146.780472941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.272948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.280458 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4kcq6"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.288310 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fm6pk"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.343784 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:31 crc kubenswrapper[4820]: W0221 06:49:31.346483 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8462a28b_a255_4ec7_9e85_cb98c6666e68.slice/crio-e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2 WatchSource:0}: Error finding container e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2: Status 404 returned error can't find the container with id e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.346663 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.346716 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.348732 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.349216 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.849201993 +0000 UTC m=+146.882286191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.357345 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.361056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bm22t"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.366929 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56"]
Feb 21 06:49:31 crc kubenswrapper[4820]: W0221 06:49:31.372823 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fe97d7_1fa0_42aa_b8df_66f25aa6ee60.slice/crio-c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16 WatchSource:0}: Error finding container c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16: Status 404 returned error can't find the container with id c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.449421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.449812 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:31.949794097 +0000 UTC m=+146.982878295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.524450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kxrb8"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.550472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.550737 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.050726482 +0000 UTC m=+147.083810680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.563592 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.570287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.580547 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.595126 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.609318 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sps4j"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.630876 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m7pv7"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.630967 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.633508 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w"]
Feb 21 06:49:31 crc kubenswrapper[4820]: W0221 06:49:31.644078 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b64a6e2_e14a_4de0_8630_e617a55b0794.slice/crio-69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651 WatchSource:0}: Error finding container 69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651: Status 404 returned error can't find the container with id 69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.651655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.652416 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.152399829 +0000 UTC m=+147.185484027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.660317 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl"]
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.754162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.754446 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.254434227 +0000 UTC m=+147.287518425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.857889 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.858603 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.35858322 +0000 UTC m=+147.391667418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:31 crc kubenswrapper[4820]: I0221 06:49:31.960536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:31 crc kubenswrapper[4820]: E0221 06:49:31.960868 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.460858075 +0000 UTC m=+147.493942273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.061918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.062059 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.562040478 +0000 UTC m=+147.595124676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.062382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.062828 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.562811121 +0000 UTC m=+147.595895309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.070175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" event={"ID":"a3d20e62-3892-4e70-adad-754ac75dd1b9","Type":"ContainerStarted","Data":"75f5ecd9e136d582bb3fdae89f6b9224aaaa0e10f700bcc50c9d929b5f3898a2"}
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.097725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" event={"ID":"0951903b-474b-4279-b6ad-ab8920fd2d5b","Type":"ContainerStarted","Data":"f268503a7e18f87985241acaf291dd3a92711e804b2f87db40310ef274db4e24"}
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.100738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sps4j" event={"ID":"177c9eb7-021d-4d7f-a044-8913469b4236","Type":"ContainerStarted","Data":"6aae7826ff160b1a23309dd94c12b3ff63c2f165074c23ce601001b0a1597c16"}
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.156485 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-shsgs" podStartSLOduration=125.156460363 podStartE2EDuration="2m5.156460363s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.14058258 +0000 UTC m=+147.173666778" watchObservedRunningTime="2026-02-21 06:49:32.156460363 +0000 UTC m=+147.189544561"
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.157281 4820 generic.go:334] "Generic (PLEG): container finished" podID="c8bb35a2-6708-4267-bb44-d80ff0e0ccc8" containerID="664261fec01aa563e66f61e44ce1cbae1fe7a29a0ccd589db01d6e50f133905e" exitCode=0
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.157367 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerDied","Data":"664261fec01aa563e66f61e44ce1cbae1fe7a29a0ccd589db01d6e50f133905e"}
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.175019 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"781da944a068c97f47f327091b67409d1ecf3bfc685ab4af8b14a53542fa00f3"}
Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.175629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.176142 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.676126273 +0000 UTC m=+147.709210461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.181815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" event={"ID":"128520ce-9a27-454a-8394-efae24e83a7c","Type":"ContainerStarted","Data":"1657bf67213cd1bbace5eec2cec8bec5a036cb00565c30662fb164524d3fdb2f"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.183697 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" event={"ID":"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a","Type":"ContainerStarted","Data":"fd00299839ff8b583b8f68c4fb8b34c1390dfeb7629ea3f2c2524369acef22d6"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.195232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" event={"ID":"3b64a6e2-e14a-4de0-8630-e617a55b0794","Type":"ContainerStarted","Data":"69ed9139cf87dd7cabd9542db7d4d1f039ff47d48a0a38ef4335ab56a783a651"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.205999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z7jtv" event={"ID":"e669b0b6-b9f5-4bf7-b18d-5fccc35dfd86","Type":"ContainerStarted","Data":"0d894d47147c87510f7b2f8db8380a0162734e3ab1023e7780225c516e083f30"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.220934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" event={"ID":"021bee51-757d-4fcb-97b6-af9ad74d569c","Type":"ContainerStarted","Data":"8df7b5a30dc2669ccc093edfe4d82344d5f472bcf7d22b0fa4e189008ed5304a"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.222637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" event={"ID":"34dd983a-2ee5-48ad-8858-59e9c0cbf483","Type":"ContainerStarted","Data":"b5cae302d3120e5ca6fb5eda19eb1d11062e10a1a00c6c23268c91d0c97d973b"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.222676 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" event={"ID":"34dd983a-2ee5-48ad-8858-59e9c0cbf483","Type":"ContainerStarted","Data":"655e3135e8933c633ffbdf96fca90788386c6886ccd3b16d3f1333bb879ee01b"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.222989 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.224589 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v8n56 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.224642 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" podUID="34dd983a-2ee5-48ad-8858-59e9c0cbf483" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.232721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerStarted","Data":"88715bb258d3aa108b4b19be2aa570b41fc0e79301b3a41e96839d1839127be2"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.242588 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-z7jtv" podStartSLOduration=7.242562046 podStartE2EDuration="7.242562046s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.231933813 +0000 UTC m=+147.265018021" watchObservedRunningTime="2026-02-21 06:49:32.242562046 +0000 UTC m=+147.275646244" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.253715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" event={"ID":"64747ec7-e06d-406d-8c6e-332b1cbe179f","Type":"ContainerStarted","Data":"033318c82ad418b91555861000478540445e6b68b26e38e0459b1fb47d5aed35"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.256456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.256753 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.261944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" event={"ID":"22582d21-813c-49a4-aa49-e4a7d3f0f638","Type":"ContainerStarted","Data":"b1589a3cd9e2b85240d12ce4483f39842ce6af690e6d0c59882ed375e457b7da"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.279604 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.279742 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxrb8" event={"ID":"8b5270e1-81d3-477a-96f9-b2cbc3090288","Type":"ContainerStarted","Data":"d160963901a3108a72ebc4df3a2f84a300f9b921caf41105a23e76fb8cabeef1"} Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.281258 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.781223334 +0000 UTC m=+147.814307612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.297544 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" podStartSLOduration=125.297525121 podStartE2EDuration="2m5.297525121s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.262477793 +0000 UTC m=+147.295561991" watchObservedRunningTime="2026-02-21 06:49:32.297525121 +0000 UTC m=+147.330609319" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.298744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" event={"ID":"fc6a5b86-a925-4f00-b0ed-19717e7e1f09","Type":"ContainerStarted","Data":"5e8218923c3874ee7ca02e1574d8cafa2d64e6dd64d22a77d151a9a74e726ace"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.315647 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" podStartSLOduration=126.315614442 podStartE2EDuration="2m6.315614442s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.314314902 +0000 UTC m=+147.347399100" watchObservedRunningTime="2026-02-21 06:49:32.315614442 +0000 UTC m=+147.348698640" Feb 21 
06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.322928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" event={"ID":"fc95478e-4574-4010-8833-5da4ec1987b3","Type":"ContainerStarted","Data":"075d695218f3a3b627399b0b9bffc8957a71e4ecfdad6a8d24c8f6616e56ddf5"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.328906 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" podStartSLOduration=125.328888686 podStartE2EDuration="2m5.328888686s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.298481129 +0000 UTC m=+147.331565327" watchObservedRunningTime="2026-02-21 06:49:32.328888686 +0000 UTC m=+147.361972884" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.336300 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6qt" podStartSLOduration=126.336285211 podStartE2EDuration="2m6.336285211s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.335359583 +0000 UTC m=+147.368443781" watchObservedRunningTime="2026-02-21 06:49:32.336285211 +0000 UTC m=+147.369369409" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.350970 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:32 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:32 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:32 crc 
kubenswrapper[4820]: healthz check failed Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.351025 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.363876 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" podStartSLOduration=125.363860351 podStartE2EDuration="2m5.363860351s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.363436008 +0000 UTC m=+147.396520206" watchObservedRunningTime="2026-02-21 06:49:32.363860351 +0000 UTC m=+147.396944549" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.369870 4820 generic.go:334] "Generic (PLEG): container finished" podID="228a9802-8837-425d-ab0f-72c79dbc4399" containerID="d86c5e28e8430789d19126b10551b6ecbc425f4dfa078132742ed0399d850bcb" exitCode=0 Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.369938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerDied","Data":"d86c5e28e8430789d19126b10551b6ecbc425f4dfa078132742ed0399d850bcb"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.369966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"a8d1f6410a441aab0514e5273b878fa229c4f99c14eebdcd025495462bbf0297"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.381743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.382182 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.882165708 +0000 UTC m=+147.915249906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.399966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" event={"ID":"eca19fe2-b995-48cf-974d-e3fc59f8b9b3","Type":"ContainerStarted","Data":"264d4c8de9572d2fcdac3d5e161f64b07d03df73777e0a8fa2405f68a6fd7160"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.420334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" event={"ID":"9185d26f-44b3-45e3-9417-11148a03a52d","Type":"ContainerStarted","Data":"38b52a348b47efeaf53e22da1432f82187d64641cb26195e01c692ec44875652"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.427749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" event={"ID":"c7e86cdb-22d7-424c-a51e-61c1d7848655","Type":"ContainerStarted","Data":"67d754f5f9200f4e1ce2f8d57c344ef661c4338710d6947334863da9e37e6488"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.437470 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-45qxr" podStartSLOduration=125.437453783 podStartE2EDuration="2m5.437453783s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.431959556 +0000 UTC m=+147.465043744" watchObservedRunningTime="2026-02-21 06:49:32.437453783 +0000 UTC m=+147.470537971" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.458670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerStarted","Data":"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.458748 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerStarted","Data":"c67db1d6ea1ea9f42d159552b399ae3814a8a2a153770e3fc34b2a49bbb171e0"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.460147 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.479749 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tgf94" podStartSLOduration=125.47972268 podStartE2EDuration="2m5.47972268s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.479120181 +0000 UTC m=+147.512204379" watchObservedRunningTime="2026-02-21 06:49:32.47972268 +0000 UTC m=+147.512806868" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.483677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.485210 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:32.985193177 +0000 UTC m=+148.018277375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.487845 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k58x6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.487884 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.511904 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podStartSLOduration=125.51188886 podStartE2EDuration="2m5.51188886s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.511264091 +0000 UTC m=+147.544348289" watchObservedRunningTime="2026-02-21 06:49:32.51188886 +0000 UTC m=+147.544973058" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.516715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" 
event={"ID":"8462a28b-a255-4ec7-9e85-cb98c6666e68","Type":"ContainerStarted","Data":"d21ba6d705d92695af74bc91029e641818054af691d6487b81a95e8fd151b912"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.516759 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" event={"ID":"8462a28b-a255-4ec7-9e85-cb98c6666e68","Type":"ContainerStarted","Data":"e85903e8a3d5d2be134b0e41ecdb1ba65b7f136fd62acaf7e4952736910196b2"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.521046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" event={"ID":"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60","Type":"ContainerStarted","Data":"4c29039920c95eee332eecb7910c50771bfe18476a7e9f463874a6ebfb1dcaf5"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.521078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" event={"ID":"b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60","Type":"ContainerStarted","Data":"c111b497ffc66028b22cfd8f6a8f221d0617df14ae13cdfff3d804d1e90a8c16"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.522024 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.526619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m7pv7" event={"ID":"1df285f9-7ae4-4fea-8817-0a7e5e851551","Type":"ContainerStarted","Data":"89b84e87bf852c692ff25a9e253f29c013798006dcfbb46b2aef64b5c1b037ba"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.545742 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" podStartSLOduration=125.545715731 podStartE2EDuration="2m5.545715731s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.539053107 +0000 UTC m=+147.572137305" watchObservedRunningTime="2026-02-21 06:49:32.545715731 +0000 UTC m=+147.578799929" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.555091 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4dnsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.555144 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" podUID="b3fe97d7-1fa0-42aa-b8df-66f25aa6ee60" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.561997 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" event={"ID":"754cb6b5-90c5-4747-8ef0-28a7c6b02448","Type":"ContainerStarted","Data":"8eb9a6dd730c2073d51dc2b87dd09bd59115a5dc4e29c78bbecb77e78fc45209"} Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.584837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.591790 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.091747043 +0000 UTC m=+148.124831241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.596093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.620925 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.120900501 +0000 UTC m=+148.153984689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.630113 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.683665 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m7pv7" podStartSLOduration=7.683636871 podStartE2EDuration="7.683636871s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:32.583647656 +0000 UTC m=+147.616731854" watchObservedRunningTime="2026-02-21 06:49:32.683636871 +0000 UTC m=+147.716721069" Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.699891 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.701827 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:33.201805815 +0000 UTC m=+148.234890013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.702023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.702432 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.202424834 +0000 UTC m=+148.235509032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.803696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.804363 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.304347799 +0000 UTC m=+148.337431997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.804634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.804940 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.304932936 +0000 UTC m=+148.338017134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:32 crc kubenswrapper[4820]: I0221 06:49:32.905665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:32 crc kubenswrapper[4820]: E0221 06:49:32.906050 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.406030786 +0000 UTC m=+148.439114984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.007864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.008246 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.508217439 +0000 UTC m=+148.541301637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.109036 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.109157 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.609139543 +0000 UTC m=+148.642223731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.109308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.109583 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.609574537 +0000 UTC m=+148.642658735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.210703 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.210901 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.710874172 +0000 UTC m=+148.743958380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.211002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.211335 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.711326595 +0000 UTC m=+148.744410783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.311604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.311799 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.811770665 +0000 UTC m=+148.844854863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.311906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.312219 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.812205888 +0000 UTC m=+148.845290076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.349891 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:33 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:33 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:33 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.349986 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.413581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.413714 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:33.91369502 +0000 UTC m=+148.946779208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.413850 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.414144 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:33.914136263 +0000 UTC m=+148.947220461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.514629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.514819 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.014791579 +0000 UTC m=+149.047875777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.514910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.515250 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.015223362 +0000 UTC m=+149.048307560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.568553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m7pv7" event={"ID":"1df285f9-7ae4-4fea-8817-0a7e5e851551","Type":"ContainerStarted","Data":"7356d0b1eaa35efd0f5db64ba47b152274dc4d1a1d829988c16aed6c5d2acefe"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.571082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" event={"ID":"021bee51-757d-4fcb-97b6-af9ad74d569c","Type":"ContainerStarted","Data":"4f7b07b2dcc928735d14be50607836c14b74ebde9973969595e23e8fe1bb36dd"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.571391 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.573494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" event={"ID":"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a","Type":"ContainerStarted","Data":"54ef5cc00634eaf23d9c876294b375f9a2440e00825a2d82195323bd652bd25e"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.573541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" 
event={"ID":"0e6916fe-cc6e-4d6e-89d0-a0d499ff028a","Type":"ContainerStarted","Data":"b53d2ab90968edb86cb1293b5f0a321255f5ed82ea60cf7f33b1cecf7326d6f3"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.574874 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerStarted","Data":"d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.577760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kxrb8" event={"ID":"8b5270e1-81d3-477a-96f9-b2cbc3090288","Type":"ContainerStarted","Data":"85617112a4d4ea68145daddd309b7480a5180d7eef69357164533db8c10391ac"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.578148 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kxrb8" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.579171 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.579208 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.581284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" 
event={"ID":"fc6a5b86-a925-4f00-b0ed-19717e7e1f09","Type":"ContainerStarted","Data":"c7d2264cdc6ef73ccff94b871ba65c20844e7dfb8d336423ca7ed6fc1b537385"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.585484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sps4j" event={"ID":"177c9eb7-021d-4d7f-a044-8913469b4236","Type":"ContainerStarted","Data":"59514df78c99316b54ef1d4074c1cb4ff3529f4d26e0463762188af68dffa419"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.585546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sps4j" event={"ID":"177c9eb7-021d-4d7f-a044-8913469b4236","Type":"ContainerStarted","Data":"3b5c98b5d5c70244ec42ac5e37567b08212d2dd9fd4247a4b77b9a3d541fa966"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.585665 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sps4j" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.588022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" event={"ID":"228a9802-8837-425d-ab0f-72c79dbc4399","Type":"ContainerStarted","Data":"cc725151a862421d96ae05acaff9476fb868bff690d23458ce49f902077f31d6"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.590625 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" event={"ID":"8462a28b-a255-4ec7-9e85-cb98c6666e68","Type":"ContainerStarted","Data":"9b16f911a6a957a967446cfbf5c6ea2fc96992bba81ede127d18b8f442dc2429"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.590763 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.591818 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.593191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" event={"ID":"a3d20e62-3892-4e70-adad-754ac75dd1b9","Type":"ContainerStarted","Data":"4cdd61ac12c712c0524d25b2b8de59b3c6079f9b05a9b82135d32533f85e73f5"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.599506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" event={"ID":"754cb6b5-90c5-4747-8ef0-28a7c6b02448","Type":"ContainerStarted","Data":"ab46a15714b5ce15e364bd8cc75cf44a9ee5d89f66a5b5ebfe6d8bc20e394dd8"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.599559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" event={"ID":"754cb6b5-90c5-4747-8ef0-28a7c6b02448","Type":"ContainerStarted","Data":"89006b00b6440cabf3b88c2b6c0225f227dc1ac672abee5f596dbd980663a035"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.600773 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h6cdl" podStartSLOduration=126.600758289 podStartE2EDuration="2m6.600758289s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.598887731 +0000 UTC m=+148.631971929" watchObservedRunningTime="2026-02-21 06:49:33.600758289 +0000 UTC m=+148.633842487" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.601136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" 
event={"ID":"3b64a6e2-e14a-4de0-8630-e617a55b0794","Type":"ContainerStarted","Data":"af6314d7ea27c7b3a4e8144c625a0c4a8949018e65622aa6a209e0d3a8700c6e"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.608785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fm6pk" event={"ID":"fc95478e-4574-4010-8833-5da4ec1987b3","Type":"ContainerStarted","Data":"6612b7ac98012272910d7d5a6a73fe2190842067b4aebead992ae2def82e662f"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.612374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" event={"ID":"c8bb35a2-6708-4267-bb44-d80ff0e0ccc8","Type":"ContainerStarted","Data":"7014e91500c417a828d3c717957a93a13e1d30d76a8161f8ed7815aaa79f7cdd"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.612648 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.615467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.615571 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.115535098 +0000 UTC m=+149.148619296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.615680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" event={"ID":"c7e86cdb-22d7-424c-a51e-61c1d7848655","Type":"ContainerStarted","Data":"e3b3788b1cfc79210c52d6900090330939103716741df826d89e6a6fc1c7526c"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.616888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.618018 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.117995653 +0000 UTC m=+149.151079851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.624855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ck2xk" event={"ID":"64747ec7-e06d-406d-8c6e-332b1cbe179f","Type":"ContainerStarted","Data":"92b83e5234915f54f8b3c1fcbb8589558df18926ca198a27371d834a143de4a6"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.627405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" event={"ID":"128520ce-9a27-454a-8394-efae24e83a7c","Type":"ContainerStarted","Data":"687382766b16f44a337c2f886e962776bbfbd10e7d5d0d3d4133f3ae54b0e6f3"} Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.632886 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k58x6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.632942 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.644418 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dnsn" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.644488 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n6xh6" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.665722 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2jrlk" podStartSLOduration=126.665698946 podStartE2EDuration="2m6.665698946s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.663392486 +0000 UTC m=+148.696476704" watchObservedRunningTime="2026-02-21 06:49:33.665698946 +0000 UTC m=+148.698783144" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.666246 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-lt58x" podStartSLOduration=127.666226872 podStartE2EDuration="2m7.666226872s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.621860221 +0000 UTC m=+148.654944419" watchObservedRunningTime="2026-02-21 06:49:33.666226872 +0000 UTC m=+148.699311070" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.715331 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" podStartSLOduration=127.715308207 podStartE2EDuration="2m7.715308207s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.712576044 
+0000 UTC m=+148.745660242" watchObservedRunningTime="2026-02-21 06:49:33.715308207 +0000 UTC m=+148.748392405" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.717894 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.718999 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.218973919 +0000 UTC m=+149.252058147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.741184 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2fm9m" podStartSLOduration=126.741163485 podStartE2EDuration="2m6.741163485s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.73607597 +0000 UTC m=+148.769160178" watchObservedRunningTime="2026-02-21 06:49:33.741163485 +0000 UTC m=+148.774247683" Feb 21 06:49:33 crc 
kubenswrapper[4820]: I0221 06:49:33.822268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.822858 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.322846393 +0000 UTC m=+149.355930591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.852850 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw" podStartSLOduration=126.852827627 podStartE2EDuration="2m6.852827627s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.840375538 +0000 UTC m=+148.873459746" watchObservedRunningTime="2026-02-21 06:49:33.852827627 +0000 UTC m=+148.885911825" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.877636 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/downloads-7954f5f757-kxrb8" podStartSLOduration=127.877619411 podStartE2EDuration="2m7.877619411s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:33.875802716 +0000 UTC m=+148.908886904" watchObservedRunningTime="2026-02-21 06:49:33.877619411 +0000 UTC m=+148.910703619" Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.923925 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.924051 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.424034305 +0000 UTC m=+149.457118503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:33 crc kubenswrapper[4820]: I0221 06:49:33.924285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:33 crc kubenswrapper[4820]: E0221 06:49:33.930054 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.430034469 +0000 UTC m=+149.463118667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.031208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.031407 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.531377145 +0000 UTC m=+149.564461343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.037613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.038253 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.538219734 +0000 UTC m=+149.571303932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.039074 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sps4j" podStartSLOduration=9.03906149 podStartE2EDuration="9.03906149s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.038856133 +0000 UTC m=+149.071940331" watchObservedRunningTime="2026-02-21 06:49:34.03906149 +0000 UTC m=+149.072145688" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.138722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.139066 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.639050665 +0000 UTC m=+149.672134863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.181879 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bm22t" podStartSLOduration=127.181854229 podStartE2EDuration="2m7.181854229s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.142323855 +0000 UTC m=+149.175408053" watchObservedRunningTime="2026-02-21 06:49:34.181854229 +0000 UTC m=+149.214938427" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.182739 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zl5zd" podStartSLOduration=127.182730435 podStartE2EDuration="2m7.182730435s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.178591559 +0000 UTC m=+149.211675757" watchObservedRunningTime="2026-02-21 06:49:34.182730435 +0000 UTC m=+149.215814633" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.208006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9qjcj" podStartSLOduration=127.207984645 podStartE2EDuration="2m7.207984645s" 
podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.206662934 +0000 UTC m=+149.239747142" watchObservedRunningTime="2026-02-21 06:49:34.207984645 +0000 UTC m=+149.241068843" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.235068 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" podStartSLOduration=128.23505109 podStartE2EDuration="2m8.23505109s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.23344512 +0000 UTC m=+149.266529318" watchObservedRunningTime="2026-02-21 06:49:34.23505109 +0000 UTC m=+149.268135288" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.239952 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.240231 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.740221477 +0000 UTC m=+149.773305675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.261071 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6jb4w" podStartSLOduration=127.261051921 podStartE2EDuration="2m7.261051921s" podCreationTimestamp="2026-02-21 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:34.258113352 +0000 UTC m=+149.291197550" watchObservedRunningTime="2026-02-21 06:49:34.261051921 +0000 UTC m=+149.294136119" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.342708 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.343058 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:34.843044309 +0000 UTC m=+149.876128507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.350449 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:34 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:34 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:34 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.350673 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.444223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.444776 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-21 06:49:34.944665024 +0000 UTC m=+149.977749222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.545513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.545725 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.045697162 +0000 UTC m=+150.078781360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.545953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.546283 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.04627669 +0000 UTC m=+150.079360878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.587550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8n56" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"7940b11b8dba4e357ae885f3a3afc0427bbc4e2d9e9987be05626b3ccc6b48d8"} Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"9b80189fe1a1351f24d6e3b9938619406bfe72ad9b115cb545588ece5f5703dc"} Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632941 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k58x6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.632975 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.633448 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.633488 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.661416 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.161399936 +0000 UTC m=+150.194484134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661712 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661892 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.661939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.662078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.662257 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.662854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.663072 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.163062717 +0000 UTC m=+150.196146915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.679130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.679660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.684673 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.710482 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.719564 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.729455 4820 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.763726 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.763953 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.263928199 +0000 UTC m=+150.297012397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.764171 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.764496 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.264488347 +0000 UTC m=+150.297572545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-566bt" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.811647 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.864664 4820 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-21T06:49:34.729481331Z","Handler":null,"Name":""} Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.866167 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:34 crc kubenswrapper[4820]: E0221 06:49:34.866524 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-21 06:49:35.366510425 +0000 UTC m=+150.399594623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.873835 4820 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.874053 4820 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.969856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.979018 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 06:49:34 crc kubenswrapper[4820]: I0221 06:49:34.979075 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.013920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-566bt\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:35 crc kubenswrapper[4820]: W0221 06:49:35.014208 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926 WatchSource:0}: Error finding container 9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926: Status 404 returned error can't find the container with id 9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.071022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.084473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 06:49:35 crc kubenswrapper[4820]: W0221 06:49:35.157814 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394 WatchSource:0}: Error finding container 003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394: Status 404 returned error can't find the container with id 003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.175951 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.185222 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.186107 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.187976 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.192594 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.273620 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.273954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.274001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: W0221 06:49:35.288132 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4 WatchSource:0}: Error finding container a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4: Status 404 returned error can't find the container with id a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.353473 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:35 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:35 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:35 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.353537 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.375116 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.375222 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " 
pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.375281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.376120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.376348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.395486 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.398090 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.399599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"certified-operators-dtbbw\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.403003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.425541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.462443 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.476011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.476092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.476136 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.498900 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.577493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.577556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.577594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.578598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"community-operators-gt7zt\" (UID: 
\"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.580538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.585758 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.586717 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.596006 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.606914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"community-operators-gt7zt\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.647213 4820 generic.go:334] "Generic (PLEG): container finished" podID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerID="d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae" exitCode=0 Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.647454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" 
event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerDied","Data":"d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.650974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"08ab4f15aee047c9dfe96d9df48e491c33e5254834a87861b2d7297fa2e83b3e"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.651009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9c2b9d563c6fe4130bbf0590596574ec8450926c54f8d3c329b4a2cb89fea926"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.652520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"22b90e97b32a7de4cd0ac2754111b813ee1bd717dfe1d8355254e7e0e59de193"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.652616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"003804c1ebaf409969326a6cff478a03eab2e982377954ee06209858d09a9394"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.654641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerStarted","Data":"b598b1cdbe0f9e05c67729eff4eb4e0b676f67f494000629fbc22161406ca524"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.678368 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.678438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.678477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.686657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" event={"ID":"b7322fd9-681a-4d9a-83ac-9e74308f8747","Type":"ContainerStarted","Data":"cc53a98b7dd17aaa34ec2e68f5b3bb8b18e65633865661349722679980e38577"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.694092 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fc5ed7dacecf277fa20a79668d542cbc147476a4b104a56ffc3afe3e30c60646"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.694129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a412f37e96f8a3ded5a0bb3faeac7548e4b2b9694a1fbc475aaef8acf07dadd4"} Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.694412 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.706512 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.741321 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qhnw8" podStartSLOduration=10.741299061 podStartE2EDuration="10.741299061s" podCreationTimestamp="2026-02-21 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:35.734842445 +0000 UTC m=+150.767926643" watchObservedRunningTime="2026-02-21 06:49:35.741299061 +0000 UTC m=+150.774383259" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.750474 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.754689 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.779857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.779940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.779992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.787605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.787769 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " 
pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.791844 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.792917 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.802742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.810610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"certified-operators-j6kgh\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.906523 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.986017 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.986948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:35 crc kubenswrapper[4820]: I0221 06:49:35.987004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.088276 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.088346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod 
\"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.088392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.090166 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.090181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.096893 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.107536 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"community-operators-wfwch\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.115497 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.151403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:49:36 crc kubenswrapper[4820]: W0221 06:49:36.162991 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dd96409_63d5_46a5_a9cb_a8e59f7fcce8.slice/crio-a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237 WatchSource:0}: Error finding container a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237: Status 404 returned error can't find the container with id a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.314515 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.346621 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:36 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:36 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:36 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.346679 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:36 crc kubenswrapper[4820]: W0221 06:49:36.383836 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad8f1e2_40cf_4c0b_aa35_d737387eca67.slice/crio-a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579 WatchSource:0}: Error finding container a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579: Status 404 returned error can't find the container with id a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.699147 4820 generic.go:334] "Generic (PLEG): container finished" podID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.699220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.699482 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerStarted","Data":"a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701050 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701376 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerID="2e6da9bd9d95bf2fdd3f87da878f483c776c8f768d2149380d2d2bef1ce92197" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701463 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" 
event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"2e6da9bd9d95bf2fdd3f87da878f483c776c8f768d2149380d2d2bef1ce92197"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.701497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerStarted","Data":"a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.703463 4820 generic.go:334] "Generic (PLEG): container finished" podID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.703557 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.703588 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerStarted","Data":"04fd41dbab4d8a603151ac33844cbba8ff658b873d854bbf23d1ef0e3e50dc39"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.705016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerStarted","Data":"8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.705127 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.706181 4820 generic.go:334] "Generic 
(PLEG): container finished" podID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" exitCode=0 Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.706744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.706770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerStarted","Data":"1186b29bef767e21ec1c625c6cc6253779166154a0a774141ac1f83ba9af24e6"} Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.811031 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" podStartSLOduration=130.811013225 podStartE2EDuration="2m10.811013225s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:36.810298864 +0000 UTC m=+151.843383082" watchObservedRunningTime="2026-02-21 06:49:36.811013225 +0000 UTC m=+151.844097423" Feb 21 06:49:36 crc kubenswrapper[4820]: I0221 06:49:36.952158 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.100677 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") pod \"0b009b00-dfa6-40ba-b629-608fc71dc429\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.100843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") pod \"0b009b00-dfa6-40ba-b629-608fc71dc429\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.100866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") pod \"0b009b00-dfa6-40ba-b629-608fc71dc429\" (UID: \"0b009b00-dfa6-40ba-b629-608fc71dc429\") " Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.101553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b009b00-dfa6-40ba-b629-608fc71dc429" (UID: "0b009b00-dfa6-40ba-b629-608fc71dc429"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.105607 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd" (OuterVolumeSpecName: "kube-api-access-hmnkd") pod "0b009b00-dfa6-40ba-b629-608fc71dc429" (UID: "0b009b00-dfa6-40ba-b629-608fc71dc429"). 
InnerVolumeSpecName "kube-api-access-hmnkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.106336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b009b00-dfa6-40ba-b629-608fc71dc429" (UID: "0b009b00-dfa6-40ba-b629-608fc71dc429"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.183815 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:49:37 crc kubenswrapper[4820]: E0221 06:49:37.184028 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerName="collect-profiles" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.184162 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerName="collect-profiles" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.184288 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" containerName="collect-profiles" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.184964 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.186595 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.197126 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.203216 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b009b00-dfa6-40ba-b629-608fc71dc429-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.203298 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmnkd\" (UniqueName: \"kubernetes.io/projected/0b009b00-dfa6-40ba-b629-608fc71dc429-kube-api-access-hmnkd\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.203312 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b009b00-dfa6-40ba-b629-608fc71dc429-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.263788 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vh8c8" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.304120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.304300 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.304326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.347965 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:37 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:37 crc kubenswrapper[4820]: [+]process-running ok Feb 21 06:49:37 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.348345 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.408960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: 
I0221 06:49:37.409084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.409103 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.411142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.421728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.436912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"redhat-marketplace-wfq7z\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.510252 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.583553 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.590534 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.596269 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.712852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.712930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.713155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.725078 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.725448 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9" event={"ID":"0b009b00-dfa6-40ba-b629-608fc71dc429","Type":"ContainerDied","Data":"88715bb258d3aa108b4b19be2aa570b41fc0e79301b3a41e96839d1839127be2"} Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.725471 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88715bb258d3aa108b4b19be2aa570b41fc0e79301b3a41e96839d1839127be2" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.751959 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:49:37 crc kubenswrapper[4820]: W0221 06:49:37.762295 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62bc411a_7f2e_4a7c_8a27_d758d4716f0e.slice/crio-0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955 WatchSource:0}: Error finding container 0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955: Status 404 returned error can't find the container with id 0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955 Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.814143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.814501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcr8k\" (UniqueName: 
\"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.814534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.815076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.815709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.843129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"redhat-marketplace-fwm8t\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.912958 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.933650 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.933747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.936166 4820 patch_prober.go:28] interesting pod/console-f9d7485db-cgbzf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 21 06:49:37 crc kubenswrapper[4820]: I0221 06:49:37.936209 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cgbzf" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.176132 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.177556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.188046 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.249171 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.265714 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.266460 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.268252 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.268582 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 21 06:49:38 crc kubenswrapper[4820]: W0221 06:49:38.279301 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod328474dd_edf9_4d6b_b9d9_50f591176ce1.slice/crio-f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad WatchSource:0}: Error finding container f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad: Status 404 returned error can't find the container with id f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.287637 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.342757 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-q9pg5" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.346209 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 21 06:49:38 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 21 06:49:38 crc kubenswrapper[4820]: 
[+]process-running ok Feb 21 06:49:38 crc kubenswrapper[4820]: healthz check failed Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.346338 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.423723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.423764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.485330 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.525264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.525475 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.525793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.547859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.584056 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.585567 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.599303 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.601578 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.605107 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694536 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694600 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694799 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-kxrb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.694869 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kxrb8" podUID="8b5270e1-81d3-477a-96f9-b2cbc3090288" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.727630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.727785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.727851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.739647 4820 generic.go:334] "Generic (PLEG): container finished" podID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" exitCode=0 Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.739697 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03"} Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.739720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerStarted","Data":"0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955"} Feb 21 06:49:38 crc 
kubenswrapper[4820]: I0221 06:49:38.745352 4820 generic.go:334] "Generic (PLEG): container finished" podID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" exitCode=0
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.745552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c"}
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.745589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerStarted","Data":"f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad"}
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.773435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nnhcf"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.834633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.836635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.895295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"redhat-operators-zcn45\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:38 crc kubenswrapper[4820]: I0221 06:49:38.926646 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.001454 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-568r2"]
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.002803 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.030172 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"]
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.048039 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.138788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.138851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.138916 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.231601 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"]
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.240048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.240091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.240131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.241043 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.241175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: W0221 06:49:39.245147 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04595c48_2a70_4760_8e24_5266735b9e82.slice/crio-85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445 WatchSource:0}: Error finding container 85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445: Status 404 returned error can't find the container with id 85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.259491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"redhat-operators-568r2\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.326734 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.346847 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 21 06:49:39 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 21 06:49:39 crc kubenswrapper[4820]: [+]process-running ok
Feb 21 06:49:39 crc kubenswrapper[4820]: healthz check failed
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.347083 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.542901 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"]
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.753882 4820 generic.go:334] "Generic (PLEG): container finished" podID="04595c48-2a70-4760-8e24-5266735b9e82" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" exitCode=0
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.753987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597"}
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.754050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerStarted","Data":"85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445"}
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.756004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerStarted","Data":"cc8e75f91419dd82bb896e1b408dbb84cd5bfe72d98d985cd4ed2107d1595d4f"}
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.756282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerStarted","Data":"289f2fd2006b7edcffe3d65b5e2dac2457318781b98243f661da0ba8b19cf53d"}
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.764408 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" exitCode=0
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.764582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83"}
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.764644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerStarted","Data":"a45f177e1207be3c08153b6e35e267a2cf4dd2c4d9944405c0f459a97610a520"}
Feb 21 06:49:39 crc kubenswrapper[4820]: I0221 06:49:39.786692 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.786673937 podStartE2EDuration="1.786673937s" podCreationTimestamp="2026-02-21 06:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:49:39.785636255 +0000 UTC m=+154.818720463" watchObservedRunningTime="2026-02-21 06:49:39.786673937 +0000 UTC m=+154.819758145"
Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.346505 4820 patch_prober.go:28] interesting pod/router-default-5444994796-q9pg5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 21 06:49:40 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 21 06:49:40 crc kubenswrapper[4820]: [+]process-running ok
Feb 21 06:49:40 crc kubenswrapper[4820]: healthz check failed
Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.346569 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q9pg5" podUID="23e5da7c-be56-4259-ab49-bf8ad50831fe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.821151 4820 generic.go:334] "Generic (PLEG): container finished" podID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerID="cc8e75f91419dd82bb896e1b408dbb84cd5bfe72d98d985cd4ed2107d1595d4f" exitCode=0
Feb 21 06:49:40 crc kubenswrapper[4820]: I0221 06:49:40.821229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerDied","Data":"cc8e75f91419dd82bb896e1b408dbb84cd5bfe72d98d985cd4ed2107d1595d4f"}
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.137803 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.138638 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.146838 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.147292 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.150763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.276932 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.277030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.351443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.356184 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-q9pg5"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.378955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.379027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.379052 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.417034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.478080 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:41 crc kubenswrapper[4820]: I0221 06:49:41.861061 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.124290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.193129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") pod \"42b7cabf-7765-4789-90b7-e8dabb197a7e\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") "
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.193229 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") pod \"42b7cabf-7765-4789-90b7-e8dabb197a7e\" (UID: \"42b7cabf-7765-4789-90b7-e8dabb197a7e\") "
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.205291 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42b7cabf-7765-4789-90b7-e8dabb197a7e" (UID: "42b7cabf-7765-4789-90b7-e8dabb197a7e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.207347 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42b7cabf-7765-4789-90b7-e8dabb197a7e" (UID: "42b7cabf-7765-4789-90b7-e8dabb197a7e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.300844 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b7cabf-7765-4789-90b7-e8dabb197a7e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.300871 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b7cabf-7765-4789-90b7-e8dabb197a7e-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.848445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcce7871-a63c-4991-b931-4ab94a014424","Type":"ContainerStarted","Data":"aafe0dd8e20a17f081effbd54daabbca3eb4fe66f8c74d667f591877fb737175"}
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.850334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42b7cabf-7765-4789-90b7-e8dabb197a7e","Type":"ContainerDied","Data":"289f2fd2006b7edcffe3d65b5e2dac2457318781b98243f661da0ba8b19cf53d"}
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.850374 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289f2fd2006b7edcffe3d65b5e2dac2457318781b98243f661da0ba8b19cf53d"
Feb 21 06:49:42 crc kubenswrapper[4820]: I0221 06:49:42.850422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.520571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sps4j"
Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.816841 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.817225 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.866500 4820 generic.go:334] "Generic (PLEG): container finished" podID="dcce7871-a63c-4991-b931-4ab94a014424" containerID="7a77cd6f06b486924607882b7871c5c15d50eaacfd23fbd83e1dcdeb521fd47b" exitCode=0
Feb 21 06:49:43 crc kubenswrapper[4820]: I0221 06:49:43.866564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcce7871-a63c-4991-b931-4ab94a014424","Type":"ContainerDied","Data":"7a77cd6f06b486924607882b7871c5c15d50eaacfd23fbd83e1dcdeb521fd47b"}
Feb 21 06:49:47 crc kubenswrapper[4820]: I0221 06:49:47.940125 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:47 crc kubenswrapper[4820]: I0221 06:49:47.945866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cgbzf"
Feb 21 06:49:48 crc kubenswrapper[4820]: I0221 06:49:48.697893 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kxrb8"
Feb 21 06:49:49 crc kubenswrapper[4820]: I0221 06:49:49.638689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:49 crc kubenswrapper[4820]: I0221 06:49:49.660224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4537dd3-6e3b-481a-9f90-668020b5558b-metrics-certs\") pod \"network-metrics-daemon-bt6wj\" (UID: \"a4537dd3-6e3b-481a-9f90-668020b5558b\") " pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:49 crc kubenswrapper[4820]: I0221 06:49:49.816674 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bt6wj"
Feb 21 06:49:51 crc kubenswrapper[4820]: I0221 06:49:51.961857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcce7871-a63c-4991-b931-4ab94a014424","Type":"ContainerDied","Data":"aafe0dd8e20a17f081effbd54daabbca3eb4fe66f8c74d667f591877fb737175"}
Feb 21 06:49:51 crc kubenswrapper[4820]: I0221 06:49:51.962157 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafe0dd8e20a17f081effbd54daabbca3eb4fe66f8c74d667f591877fb737175"
Feb 21 06:49:51 crc kubenswrapper[4820]: I0221 06:49:51.975524 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.066747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") pod \"dcce7871-a63c-4991-b931-4ab94a014424\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") "
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.066870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") pod \"dcce7871-a63c-4991-b931-4ab94a014424\" (UID: \"dcce7871-a63c-4991-b931-4ab94a014424\") "
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.066973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcce7871-a63c-4991-b931-4ab94a014424" (UID: "dcce7871-a63c-4991-b931-4ab94a014424"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.067120 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcce7871-a63c-4991-b931-4ab94a014424-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.074915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcce7871-a63c-4991-b931-4ab94a014424" (UID: "dcce7871-a63c-4991-b931-4ab94a014424"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.168231 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcce7871-a63c-4991-b931-4ab94a014424-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 21 06:49:52 crc kubenswrapper[4820]: I0221 06:49:52.968590 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 21 06:49:53 crc kubenswrapper[4820]: I0221 06:49:53.996838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bt6wj"]
Feb 21 06:49:55 crc kubenswrapper[4820]: I0221 06:49:55.185465 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-566bt"
Feb 21 06:50:02 crc kubenswrapper[4820]: I0221 06:50:02.049035 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" event={"ID":"a4537dd3-6e3b-481a-9f90-668020b5558b","Type":"ContainerStarted","Data":"cfd19c96c78f13114fafe6e2f8d22f644d978e4f44d89e25f82eaeb6ebd0e9a7"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.063893 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerStarted","Data":"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.066098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerStarted","Data":"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.069504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerStarted","Data":"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.071223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerStarted","Data":"c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.086414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerStarted","Data":"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.089147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerStarted","Data":"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e"}
Feb 21 06:50:05 crc kubenswrapper[4820]: I0221 06:50:05.097717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" event={"ID":"a4537dd3-6e3b-481a-9f90-668020b5558b","Type":"ContainerStarted","Data":"898f87566cf619682b2563278404e107d6e21fdef12135bdea44f107415f9ea9"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.104161 4820 generic.go:334] "Generic (PLEG): container finished" podID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.104232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.106523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bt6wj" event={"ID":"a4537dd3-6e3b-481a-9f90-668020b5558b","Type":"ContainerStarted","Data":"f48fac4471614c75906d1467afec1707c20e35aa6c7b0ef5eb08a48d0d219955"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.108634 4820 generic.go:334] "Generic (PLEG): container finished" podID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.108670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.110893 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.110949 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.112515 4820 generic.go:334] "Generic (PLEG): container finished" podID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.112542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.117905 4820 generic.go:334] "Generic (PLEG): container finished" podID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.117964 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.127689 4820 generic.go:334] "Generic (PLEG): container finished" podID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.128114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.132011 4820 generic.go:334] "Generic (PLEG): container finished" podID="04595c48-2a70-4760-8e24-5266735b9e82" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.132066 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.135133 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerID="c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e" exitCode=0
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.135154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e"}
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.216702 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bt6wj" podStartSLOduration=160.216678888 podStartE2EDuration="2m40.216678888s" podCreationTimestamp="2026-02-21 06:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:06.212469529 +0000 UTC m=+181.245553767" watchObservedRunningTime="2026-02-21 06:50:06.216678888 +0000 UTC m=+181.249763086"
Feb 21 06:50:06 crc kubenswrapper[4820]: I0221 06:50:06.424681 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"]
Feb 21 06:50:08 crc kubenswrapper[4820]: I0221 06:50:08.420354 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9w9rw"
Feb 21 06:50:10 crc kubenswrapper[4820]: I0221 06:50:10.159042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerStarted","Data":"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9"}
Feb 21 06:50:11 crc kubenswrapper[4820]: I0221 06:50:11.183115 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dtbbw" podStartSLOduration=3.680531012 podStartE2EDuration="36.183100119s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.707508992 +0000 UTC m=+151.740593190" lastFinishedPulling="2026-02-21 06:50:09.210078099 +0000 UTC m=+184.243162297" observedRunningTime="2026-02-21 06:50:11.180012185 +0000 UTC m=+186.213096383" watchObservedRunningTime="2026-02-21 06:50:11.183100119 +0000 UTC m=+186.216184317"
Feb 21 06:50:12 crc kubenswrapper[4820]: I0221 06:50:12.170453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerStarted","Data":"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e"}
Feb 21 06:50:12 crc kubenswrapper[4820]: I0221 06:50:12.187451 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfq7z" podStartSLOduration=2.769327039 podStartE2EDuration="35.187435182s" podCreationTimestamp="2026-02-21 06:49:37 +0000 UTC" firstStartedPulling="2026-02-21 06:49:38.741127739 +0000 UTC m=+153.774211937" lastFinishedPulling="2026-02-21 06:50:11.159235862 +0000 UTC m=+186.192320080" observedRunningTime="2026-02-21 06:50:12.183627707 +0000 UTC m=+187.216711905" watchObservedRunningTime="2026-02-21 06:50:12.187435182 +0000 UTC m=+187.220519380"
Feb 21 06:50:13 crc kubenswrapper[4820]: I0221 06:50:13.816261 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 06:50:13 crc kubenswrapper[4820]: I0221 06:50:13.817160 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.189190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerStarted","Data":"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf"} Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.198793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerStarted","Data":"11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f"} Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.244940 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-568r2" podStartSLOduration=2.700989918 podStartE2EDuration="36.244921415s" podCreationTimestamp="2026-02-21 06:49:38 +0000 UTC" firstStartedPulling="2026-02-21 06:49:39.767165823 +0000 UTC m=+154.800250021" lastFinishedPulling="2026-02-21 06:50:13.31109732 +0000 UTC m=+188.344181518" observedRunningTime="2026-02-21 06:50:14.222205984 +0000 UTC m=+189.255290182" watchObservedRunningTime="2026-02-21 06:50:14.244921415 +0000 UTC m=+189.278005613" Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.718962 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 21 06:50:14 crc kubenswrapper[4820]: I0221 06:50:14.738222 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfwch" podStartSLOduration=2.412120436 podStartE2EDuration="39.738206261s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" 
firstStartedPulling="2026-02-21 06:49:36.702571363 +0000 UTC m=+151.735655561" lastFinishedPulling="2026-02-21 06:50:14.028657188 +0000 UTC m=+189.061741386" observedRunningTime="2026-02-21 06:50:14.242223323 +0000 UTC m=+189.275307521" watchObservedRunningTime="2026-02-21 06:50:14.738206261 +0000 UTC m=+189.771290459" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.207607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerStarted","Data":"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8"} Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.209716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerStarted","Data":"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8"} Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.215186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerStarted","Data":"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1"} Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.217420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerStarted","Data":"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b"} Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.233356 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gt7zt" podStartSLOduration=2.845149547 podStartE2EDuration="40.233339474s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.704334086 +0000 UTC 
m=+151.737418284" lastFinishedPulling="2026-02-21 06:50:14.092524013 +0000 UTC m=+189.125608211" observedRunningTime="2026-02-21 06:50:15.228742964 +0000 UTC m=+190.261827162" watchObservedRunningTime="2026-02-21 06:50:15.233339474 +0000 UTC m=+190.266423672" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.253424 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fwm8t" podStartSLOduration=2.968959189 podStartE2EDuration="38.253403024s" podCreationTimestamp="2026-02-21 06:49:37 +0000 UTC" firstStartedPulling="2026-02-21 06:49:38.749408211 +0000 UTC m=+153.782492409" lastFinishedPulling="2026-02-21 06:50:14.033852046 +0000 UTC m=+189.066936244" observedRunningTime="2026-02-21 06:50:15.249889268 +0000 UTC m=+190.282973476" watchObservedRunningTime="2026-02-21 06:50:15.253403024 +0000 UTC m=+190.286487222" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.269214 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j6kgh" podStartSLOduration=2.773873706 podStartE2EDuration="40.269194486s" podCreationTimestamp="2026-02-21 06:49:35 +0000 UTC" firstStartedPulling="2026-02-21 06:49:36.700817119 +0000 UTC m=+151.733901317" lastFinishedPulling="2026-02-21 06:50:14.196137899 +0000 UTC m=+189.229222097" observedRunningTime="2026-02-21 06:50:15.264339908 +0000 UTC m=+190.297424116" watchObservedRunningTime="2026-02-21 06:50:15.269194486 +0000 UTC m=+190.302278684" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.288460 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcn45" podStartSLOduration=2.938700858 podStartE2EDuration="37.288442652s" podCreationTimestamp="2026-02-21 06:49:38 +0000 UTC" firstStartedPulling="2026-02-21 06:49:39.756758045 +0000 UTC m=+154.789842243" lastFinishedPulling="2026-02-21 06:50:14.106499849 +0000 UTC m=+189.139584037" 
observedRunningTime="2026-02-21 06:50:15.285046669 +0000 UTC m=+190.318130867" watchObservedRunningTime="2026-02-21 06:50:15.288442652 +0000 UTC m=+190.321526860" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.500021 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.500068 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.687699 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.755552 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.755592 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.906973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:15 crc kubenswrapper[4820]: I0221 06:50:15.907026 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.115934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.115978 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.273749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.800190 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gt7zt" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" probeResult="failure" output=< Feb 21 06:50:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 06:50:16 crc kubenswrapper[4820]: > Feb 21 06:50:16 crc kubenswrapper[4820]: I0221 06:50:16.940096 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-j6kgh" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" probeResult="failure" output=< Feb 21 06:50:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 06:50:16 crc kubenswrapper[4820]: > Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.152385 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wfwch" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" probeResult="failure" output=< Feb 21 06:50:17 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 06:50:17 crc kubenswrapper[4820]: > Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.511320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.512183 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.549039 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.913446 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.913516 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.940164 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 21 06:50:17 crc kubenswrapper[4820]: E0221 06:50:17.949261 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcce7871-a63c-4991-b931-4ab94a014424" containerName="pruner" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949296 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcce7871-a63c-4991-b931-4ab94a014424" containerName="pruner" Feb 21 06:50:17 crc kubenswrapper[4820]: E0221 06:50:17.949321 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerName="pruner" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949330 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerName="pruner" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949851 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcce7871-a63c-4991-b931-4ab94a014424" containerName="pruner" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.949887 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b7cabf-7765-4789-90b7-e8dabb197a7e" containerName="pruner" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.950639 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.956060 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.956163 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.968756 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 21 06:50:17 crc kubenswrapper[4820]: I0221 06:50:17.992994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.085026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.085174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.186029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.186281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.186606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.209441 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.275298 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.288150 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.710465 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.928480 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:50:18 crc kubenswrapper[4820]: I0221 06:50:18.928529 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.240467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerStarted","Data":"b525acd930296f7cd7932d46894befc2e5a4f56236f3ccce00946c5e57d66920"} Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.240775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerStarted","Data":"3bb169ae922bc05641cc6b9ccc5dce61a3e600355451aaf7bc6da01715a9061b"} Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.254991 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.254971416 podStartE2EDuration="2.254971416s" podCreationTimestamp="2026-02-21 06:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:19.254788671 +0000 UTC m=+194.287872869" watchObservedRunningTime="2026-02-21 06:50:19.254971416 +0000 UTC m=+194.288055614" Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.328081 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.328145 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:19 crc kubenswrapper[4820]: I0221 06:50:19.975797 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zcn45" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" probeResult="failure" output=< Feb 21 06:50:19 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 06:50:19 crc kubenswrapper[4820]: > Feb 21 06:50:20 crc kubenswrapper[4820]: I0221 06:50:20.246690 4820 generic.go:334] "Generic (PLEG): container finished" podID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerID="b525acd930296f7cd7932d46894befc2e5a4f56236f3ccce00946c5e57d66920" exitCode=0 Feb 21 06:50:20 crc kubenswrapper[4820]: I0221 06:50:20.246764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerDied","Data":"b525acd930296f7cd7932d46894befc2e5a4f56236f3ccce00946c5e57d66920"} Feb 21 06:50:20 crc kubenswrapper[4820]: I0221 06:50:20.365772 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-568r2" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" probeResult="failure" output=< Feb 21 06:50:20 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 06:50:20 crc kubenswrapper[4820]: > Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.483993 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.526707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") pod \"155e5f64-211d-4b89-b8dc-48f3edf80891\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.526793 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") pod \"155e5f64-211d-4b89-b8dc-48f3edf80891\" (UID: \"155e5f64-211d-4b89-b8dc-48f3edf80891\") " Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.527262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "155e5f64-211d-4b89-b8dc-48f3edf80891" (UID: "155e5f64-211d-4b89-b8dc-48f3edf80891"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.535085 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "155e5f64-211d-4b89-b8dc-48f3edf80891" (UID: "155e5f64-211d-4b89-b8dc-48f3edf80891"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.628742 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/155e5f64-211d-4b89-b8dc-48f3edf80891-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:21 crc kubenswrapper[4820]: I0221 06:50:21.628787 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/155e5f64-211d-4b89-b8dc-48f3edf80891-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:22 crc kubenswrapper[4820]: I0221 06:50:22.259469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"155e5f64-211d-4b89-b8dc-48f3edf80891","Type":"ContainerDied","Data":"3bb169ae922bc05641cc6b9ccc5dce61a3e600355451aaf7bc6da01715a9061b"} Feb 21 06:50:22 crc kubenswrapper[4820]: I0221 06:50:22.259515 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb169ae922bc05641cc6b9ccc5dce61a3e600355451aaf7bc6da01715a9061b" Feb 21 06:50:22 crc kubenswrapper[4820]: I0221 06:50:22.259812 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.535177 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 21 06:50:24 crc kubenswrapper[4820]: E0221 06:50:24.535716 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerName="pruner" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.535732 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerName="pruner" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.535863 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="155e5f64-211d-4b89-b8dc-48f3edf80891" containerName="pruner" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.536324 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.544399 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.544841 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.551047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.669060 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.669135 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.669480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.771958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.787830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:24 crc kubenswrapper[4820]: I0221 06:50:24.872754 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.070925 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 21 06:50:25 crc kubenswrapper[4820]: W0221 06:50:25.075801 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf6ac3e04_b33d_46c2_8935_502b7c8d4bfc.slice/crio-ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a WatchSource:0}: Error finding container ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a: Status 404 returned error can't find the container with id ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.275731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerStarted","Data":"ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a"} Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.809189 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.848950 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.949657 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:25 crc kubenswrapper[4820]: I0221 06:50:25.999921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.162472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.203387 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.284114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerStarted","Data":"fd9f1cc14dd093044b63334c794d4b11879cbbf515e8afde37172cf044869902"} Feb 21 06:50:26 crc kubenswrapper[4820]: I0221 06:50:26.305490 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.305468798 podStartE2EDuration="2.305468798s" podCreationTimestamp="2026-02-21 06:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:26.302347039 +0000 UTC m=+201.335431257" watchObservedRunningTime="2026-02-21 06:50:26.305468798 +0000 UTC m=+201.338552996" Feb 21 06:50:27 crc kubenswrapper[4820]: I0221 06:50:27.953575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.145371 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.145637 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wfwch" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" containerID="cri-o://11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f" gracePeriod=2 Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.295183 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerID="11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f" exitCode=0 Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.295253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f"} Feb 21 06:50:28 crc kubenswrapper[4820]: I0221 06:50:28.981106 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.019463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.270851 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.302704 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wfwch" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.302721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfwch" event={"ID":"4ad8f1e2-40cf-4c0b-aa35-d737387eca67","Type":"ContainerDied","Data":"a2ae9f307855ab18cee074942ddc0bb885feb467bda6834f2adedf2f6ba48579"} Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.302785 4820 scope.go:117] "RemoveContainer" containerID="11d9aa8adc2d52eb3d37fd794491ef7312641aeaa02431dcdb7b8157f4bf8b0f" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.326374 4820 scope.go:117] "RemoveContainer" containerID="c0facf7a97d78362dd30b0aa85074bfc5ee3fe6f4603ba8e654f5fe8d83bb24e" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.328160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") pod \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.328485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") pod \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.328568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") pod \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\" (UID: \"4ad8f1e2-40cf-4c0b-aa35-d737387eca67\") " Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.329739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities" (OuterVolumeSpecName: "utilities") pod "4ad8f1e2-40cf-4c0b-aa35-d737387eca67" (UID: "4ad8f1e2-40cf-4c0b-aa35-d737387eca67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.335876 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf" (OuterVolumeSpecName: "kube-api-access-s7zsf") pod "4ad8f1e2-40cf-4c0b-aa35-d737387eca67" (UID: "4ad8f1e2-40cf-4c0b-aa35-d737387eca67"). InnerVolumeSpecName "kube-api-access-s7zsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.361577 4820 scope.go:117] "RemoveContainer" containerID="2e6da9bd9d95bf2fdd3f87da878f483c776c8f768d2149380d2d2bef1ce92197" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.375560 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.417694 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.433353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7zsf\" (UniqueName: \"kubernetes.io/projected/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-kube-api-access-s7zsf\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:29 crc kubenswrapper[4820]: I0221 06:50:29.433408 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.128823 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.129136 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j6kgh" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" containerID="cri-o://af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" gracePeriod=2 Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.156224 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.156662 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fwm8t" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" containerID="cri-o://d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" gracePeriod=2 Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.376538 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ad8f1e2-40cf-4c0b-aa35-d737387eca67" (UID: "4ad8f1e2-40cf-4c0b-aa35-d737387eca67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.426654 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad8f1e2-40cf-4c0b-aa35-d737387eca67-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.428099 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.431120 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wfwch"] Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.453436 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" containerID="cri-o://703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" gracePeriod=15 Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.704525 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" path="/var/lib/kubelet/pods/4ad8f1e2-40cf-4c0b-aa35-d737387eca67/volumes" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.826843 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.870841 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.877364 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.931436 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") pod \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.931546 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") pod \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.931575 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") pod \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\" (UID: \"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8\") " Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.932618 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities" (OuterVolumeSpecName: "utilities") pod "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" (UID: "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.936564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw" (OuterVolumeSpecName: "kube-api-access-nw6fw") pod "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" (UID: "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8"). InnerVolumeSpecName "kube-api-access-nw6fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:31 crc kubenswrapper[4820]: I0221 06:50:31.978884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" (UID: "0dd96409-63d5-46a5-a9cb-a8e59f7fcce8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.032834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033167 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") pod \"328474dd-edf9-4d6b-b9d9-50f591176ce1\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") 
" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") pod \"328474dd-edf9-4d6b-b9d9-50f591176ce1\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033485 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033508 
4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033544 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033820 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") pod \"328474dd-edf9-4d6b-b9d9-50f591176ce1\" (UID: \"328474dd-edf9-4d6b-b9d9-50f591176ce1\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.033861 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") pod \"a2b27a90-ce04-40f3-9656-148cca792c55\" (UID: \"a2b27a90-ce04-40f3-9656-148cca792c55\") " Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034659 4820 
reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2b27a90-ce04-40f3-9656-148cca792c55-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034765 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6fw\" (UniqueName: \"kubernetes.io/projected/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-kube-api-access-nw6fw\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034780 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034793 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.034956 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.035050 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.035151 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.035725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.036090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities" (OuterVolumeSpecName: "utilities") pod "328474dd-edf9-4d6b-b9d9-50f591176ce1" (UID: "328474dd-edf9-4d6b-b9d9-50f591176ce1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.038220 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.038678 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.039140 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.039886 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v" (OuterVolumeSpecName: "kube-api-access-4j82v") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "kube-api-access-4j82v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040147 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.040415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a2b27a90-ce04-40f3-9656-148cca792c55" (UID: "a2b27a90-ce04-40f3-9656-148cca792c55"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.041827 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k" (OuterVolumeSpecName: "kube-api-access-kcr8k") pod "328474dd-edf9-4d6b-b9d9-50f591176ce1" (UID: "328474dd-edf9-4d6b-b9d9-50f591176ce1"). InnerVolumeSpecName "kube-api-access-kcr8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.068762 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328474dd-edf9-4d6b-b9d9-50f591176ce1" (UID: "328474dd-edf9-4d6b-b9d9-50f591176ce1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.135915 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.135948 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.135995 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136007 4820 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136039 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136049 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcr8k\" (UniqueName: \"kubernetes.io/projected/328474dd-edf9-4d6b-b9d9-50f591176ce1-kube-api-access-kcr8k\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136056 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136065 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136074 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136082 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 
06:50:32.136102 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136111 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136120 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328474dd-edf9-4d6b-b9d9-50f591176ce1-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136132 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j82v\" (UniqueName: \"kubernetes.io/projected/a2b27a90-ce04-40f3-9656-148cca792c55-kube-api-access-4j82v\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136142 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.136151 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b27a90-ce04-40f3-9656-148cca792c55-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.384723 4820 generic.go:334] "Generic (PLEG): container finished" podID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" exitCode=0 Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 
06:50:32.384797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.384795 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwm8t" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.385055 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwm8t" event={"ID":"328474dd-edf9-4d6b-b9d9-50f591176ce1","Type":"ContainerDied","Data":"f8a5b6747be5d1dc78ec352a1da0ebd534f07dbe8949b884afe0b97aa6675dad"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.385127 4820 scope.go:117] "RemoveContainer" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.387996 4820 generic.go:334] "Generic (PLEG): container finished" podID="a2b27a90-ce04-40f3-9656-148cca792c55" containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" exitCode=0 Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.388060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerDied","Data":"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.388081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" event={"ID":"a2b27a90-ce04-40f3-9656-148cca792c55","Type":"ContainerDied","Data":"163e0224df79387e94d53de67771865cc2f448fe55307754f0c2f2e2575f77bd"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.388099 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f6j4c" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390557 4820 generic.go:334] "Generic (PLEG): container finished" podID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" exitCode=0 Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6kgh" event={"ID":"0dd96409-63d5-46a5-a9cb-a8e59f7fcce8","Type":"ContainerDied","Data":"a1cf12a01af1b785eb3cc4bfef081e961870a39c601e9949c2b4118d5ac92237"} Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.390647 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j6kgh" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.404319 4820 scope.go:117] "RemoveContainer" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.428770 4820 scope.go:117] "RemoveContainer" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.431329 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.434136 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f6j4c"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.446605 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.454327 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j6kgh"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.463135 4820 scope.go:117] "RemoveContainer" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.463455 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.464133 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b\": container with ID starting with d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b not found: ID does not exist" containerID="d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b" Feb 21 06:50:32 crc 
kubenswrapper[4820]: I0221 06:50:32.464178 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b"} err="failed to get container status \"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b\": rpc error: code = NotFound desc = could not find container \"d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b\": container with ID starting with d90a180b1eb3c4ab57c27780bc7e7b7d231ebb14d751d1f6bcb71e12cb7ea21b not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.464290 4820 scope.go:117] "RemoveContainer" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.465194 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e\": container with ID starting with 0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e not found: ID does not exist" containerID="0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465372 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e"} err="failed to get container status \"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e\": rpc error: code = NotFound desc = could not find container \"0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e\": container with ID starting with 0091c4795ba17f1fef486ae9a4f39361a6a4a7431146905a610209f172b4bc9e not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465483 4820 scope.go:117] "RemoveContainer" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" Feb 21 
06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.465883 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c\": container with ID starting with f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c not found: ID does not exist" containerID="f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465925 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c"} err="failed to get container status \"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c\": rpc error: code = NotFound desc = could not find container \"f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c\": container with ID starting with f888351c871063e8d1914ba418d7d0e49b642a71c56788d83a2a3089e604281c not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.465952 4820 scope.go:117] "RemoveContainer" containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.468128 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwm8t"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.488813 4820 scope.go:117] "RemoveContainer" containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.489330 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab\": container with ID starting with 703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab not found: ID does not exist" 
containerID="703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.489358 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab"} err="failed to get container status \"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab\": rpc error: code = NotFound desc = could not find container \"703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab\": container with ID starting with 703e2540684916ac3fdaa831c7a54c1c87e65c85c2d5a6b38f13c25b409a1fab not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.489381 4820 scope.go:117] "RemoveContainer" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.504822 4820 scope.go:117] "RemoveContainer" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.520710 4820 scope.go:117] "RemoveContainer" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.534356 4820 scope.go:117] "RemoveContainer" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.534749 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8\": container with ID starting with af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8 not found: ID does not exist" containerID="af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.534788 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8"} err="failed to get container status \"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8\": rpc error: code = NotFound desc = could not find container \"af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8\": container with ID starting with af39d66bdff02832f31acaec0aa327ca483bafed7e071b23590644cea8b39be8 not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.534812 4820 scope.go:117] "RemoveContainer" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.535065 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4\": container with ID starting with f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4 not found: ID does not exist" containerID="f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.535098 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4"} err="failed to get container status \"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4\": rpc error: code = NotFound desc = could not find container \"f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4\": container with ID starting with f09b667602a93161a580a21b5da73823dc1b1c4f4603b17e80d9c5c73418b8f4 not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.535121 4820 scope.go:117] "RemoveContainer" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" Feb 21 06:50:32 crc kubenswrapper[4820]: E0221 06:50:32.535491 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff\": container with ID starting with 38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff not found: ID does not exist" containerID="38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.535519 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff"} err="failed to get container status \"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff\": rpc error: code = NotFound desc = could not find container \"38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff\": container with ID starting with 38b054a6a6f9b75ebc2675bd5d2f0a952d004563865bc516a9c58068b55519ff not found: ID does not exist" Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.754408 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:50:32 crc kubenswrapper[4820]: I0221 06:50:32.754660 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-568r2" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" containerID="cri-o://c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" gracePeriod=2 Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.085658 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.256671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") pod \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.256777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") pod \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.256805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") pod \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\" (UID: \"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6\") " Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.257746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities" (OuterVolumeSpecName: "utilities") pod "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" (UID: "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.267457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8" (OuterVolumeSpecName: "kube-api-access-nzzp8") pod "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" (UID: "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6"). InnerVolumeSpecName "kube-api-access-nzzp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.357750 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzp8\" (UniqueName: \"kubernetes.io/projected/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-kube-api-access-nzzp8\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.357782 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398491 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" exitCode=0 Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf"} Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398627 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-568r2" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398711 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-568r2" event={"ID":"8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6","Type":"ContainerDied","Data":"a45f177e1207be3c08153b6e35e267a2cf4dd2c4d9944405c0f459a97610a520"} Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.398735 4820 scope.go:117] "RemoveContainer" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.414309 4820 scope.go:117] "RemoveContainer" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.430854 4820 scope.go:117] "RemoveContainer" containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445020 4820 scope.go:117] "RemoveContainer" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.445513 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf\": container with ID starting with c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf not found: ID does not exist" containerID="c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445554 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf"} err="failed to get container status \"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf\": rpc error: code = NotFound desc = could not find container 
\"c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf\": container with ID starting with c4398e9143e088c5611417616fdb95e65217966afcbb69e567730f2230bbc8bf not found: ID does not exist" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445581 4820 scope.go:117] "RemoveContainer" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.445949 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193\": container with ID starting with 3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193 not found: ID does not exist" containerID="3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.445982 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193"} err="failed to get container status \"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193\": rpc error: code = NotFound desc = could not find container \"3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193\": container with ID starting with 3188901daa3efee78f15764c05c91b9f28b60cd8a86f1f4c49ae1db5cb31c193 not found: ID does not exist" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.446004 4820 scope.go:117] "RemoveContainer" containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.446402 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83\": container with ID starting with c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83 not found: ID does not exist" 
containerID="c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.446429 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83"} err="failed to get container status \"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83\": rpc error: code = NotFound desc = could not find container \"c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83\": container with ID starting with c2b78aa2e410711188cb53b2d22be7aedee3f636636a1333ddf1f6e8c35ddf83 not found: ID does not exist" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.447897 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" (UID: "8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.459358 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-86648b79cc-g95bw"] Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530599 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530614 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530624 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530630 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530639 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530645 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530652 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530678 4820 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530689 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530694 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530705 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530711 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530722 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530728 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530754 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530761 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530770 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530777 4820 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530785 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530791 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530805 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530811 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530820 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530825 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="extract-utilities" Feb 21 06:50:33 crc kubenswrapper[4820]: E0221 06:50:33.530833 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530838 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="extract-content" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530920 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" containerName="oauth-openshift" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530928 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4ad8f1e2-40cf-4c0b-aa35-d737387eca67" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530938 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530945 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.530952 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" containerName="registry-server" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.531345 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.534780 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.535899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.538285 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.538494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539131 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539285 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539406 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.539902 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.542742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86648b79cc-g95bw"] Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.542848 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.546988 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.547900 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.549944 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.552457 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.558847 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 06:50:33 crc 
kubenswrapper[4820]: I0221 06:50:33.661820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661882 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-router-certs\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-service-ca\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.661978 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-policies\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-error\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662062 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46c7\" (UniqueName: \"kubernetes.io/projected/fe964e16-a5ab-4149-a65d-ad052695d25a-kube-api-access-t46c7\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662159 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-session\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-login\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: 
\"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.662232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-dir\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.702467 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd96409-63d5-46a5-a9cb-a8e59f7fcce8" path="/var/lib/kubelet/pods/0dd96409-63d5-46a5-a9cb-a8e59f7fcce8/volumes" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.703125 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328474dd-edf9-4d6b-b9d9-50f591176ce1" path="/var/lib/kubelet/pods/328474dd-edf9-4d6b-b9d9-50f591176ce1/volumes" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.703850 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b27a90-ce04-40f3-9656-148cca792c55" path="/var/lib/kubelet/pods/a2b27a90-ce04-40f3-9656-148cca792c55/volumes" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.733020 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.735547 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-568r2"] Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762891 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-dir\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " 
pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762974 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-router-certs\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.762987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-dir\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-service-ca\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-policies\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763118 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763140 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-error\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t46c7\" (UniqueName: \"kubernetes.io/projected/fe964e16-a5ab-4149-a65d-ad052695d25a-kube-api-access-t46c7\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " 
pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763655 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763723 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763751 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-session\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.763772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-login\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.764297 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.766050 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-audit-policies\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.766631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-service-ca\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.767281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " 
pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.767410 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-router-certs\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.767488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.768423 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-login\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.769515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-template-error\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.769747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-session\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.769993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.771035 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.771715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe964e16-a5ab-4149-a65d-ad052695d25a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.779252 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t46c7\" (UniqueName: \"kubernetes.io/projected/fe964e16-a5ab-4149-a65d-ad052695d25a-kube-api-access-t46c7\") pod \"oauth-openshift-86648b79cc-g95bw\" (UID: \"fe964e16-a5ab-4149-a65d-ad052695d25a\") " pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 
06:50:33 crc kubenswrapper[4820]: I0221 06:50:33.889782 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:34 crc kubenswrapper[4820]: I0221 06:50:34.276023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86648b79cc-g95bw"] Feb 21 06:50:34 crc kubenswrapper[4820]: I0221 06:50:34.410598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" event={"ID":"fe964e16-a5ab-4149-a65d-ad052695d25a","Type":"ContainerStarted","Data":"db68ead3824bb56ef79f60fd86d2fcac30607473272dc33b11592ae1794e8383"} Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.418809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" event={"ID":"fe964e16-a5ab-4149-a65d-ad052695d25a","Type":"ContainerStarted","Data":"a7456fcb33119538b84b9924c19c422849a220bac6941bb092e769a51c221c7f"} Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.419268 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.423782 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.442779 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86648b79cc-g95bw" podStartSLOduration=29.442761748 podStartE2EDuration="29.442761748s" podCreationTimestamp="2026-02-21 06:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:50:35.440198906 +0000 UTC m=+210.473283104" watchObservedRunningTime="2026-02-21 
06:50:35.442761748 +0000 UTC m=+210.475845946" Feb 21 06:50:35 crc kubenswrapper[4820]: I0221 06:50:35.703009 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6" path="/var/lib/kubelet/pods/8e2dfe2b-ebe3-473b-86f4-ad24d6e9efd6/volumes" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816051 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816404 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816878 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 06:50:43 crc kubenswrapper[4820]: I0221 06:50:43.816931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb" gracePeriod=600 Feb 21 06:50:44 crc kubenswrapper[4820]: I0221 06:50:44.467513 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb" exitCode=0 Feb 21 06:50:44 crc kubenswrapper[4820]: I0221 06:50:44.467655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb"} Feb 21 06:50:45 crc kubenswrapper[4820]: I0221 06:50:45.474901 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb"} Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.014347 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.015511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.015618 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.015882 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016077 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016125 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" gracePeriod=15 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016169 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" gracePeriod=15 Feb 21 06:51:03 crc 
kubenswrapper[4820]: I0221 06:51:03.015919 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016419 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016440 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016451 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016459 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016476 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016490 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016499 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016514 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016522 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016538 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016547 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016560 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016568 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016691 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016702 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016715 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016727 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016736 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016745 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.016879 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.016889 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.017000 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.063624 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134066 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134133 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.134206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc 
kubenswrapper[4820]: I0221 06:51:03.134254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 
06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235406 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 
06:51:03.235462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235437 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.235637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.356855 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 21 06:51:03 crc kubenswrapper[4820]: E0221 06:51:03.379589 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896305944a78753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,LastTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.558889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6a7e3e659ca26bc70cc318d42e85eaae342dc2e65808645fd4fc3f3a6a00589b"} Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.561546 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.563618 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564619 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564645 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564653 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564765 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" exitCode=2 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.564821 4820 scope.go:117] 
"RemoveContainer" containerID="97c1998340520b9e766ee2c8dc87902a46648b1bed3dfa9c3183cb9111791eff" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.567144 4820 generic.go:334] "Generic (PLEG): container finished" podID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerID="fd9f1cc14dd093044b63334c794d4b11879cbbf515e8afde37172cf044869902" exitCode=0 Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.567177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerDied","Data":"fd9f1cc14dd093044b63334c794d4b11879cbbf515e8afde37172cf044869902"} Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.567731 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.569148 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:03 crc kubenswrapper[4820]: I0221 06:51:03.569574 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.573227 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c"} Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.574337 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.574536 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.577583 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.868705 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.869178 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:04 crc kubenswrapper[4820]: I0221 06:51:04.869441 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.057673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") pod \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.057770 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") pod \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058372 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") pod \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\" (UID: \"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.057868 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" (UID: "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock" (OuterVolumeSpecName: "var-lock") pod "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" (UID: "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058565 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-var-lock\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.058580 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.062637 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" (UID: "f6ac3e04-b33d-46c2-8935-502b7c8d4bfc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.159443 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6ac3e04-b33d-46c2-8935-502b7c8d4bfc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.334736 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896305944a78753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,LastTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.386433 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.387128 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.387794 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.388267 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.388507 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.564768 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.564907 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565178 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565263 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565375 4820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565387 4820 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.565397 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.585304 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.586025 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" exitCode=0 Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.586095 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.586144 4820 scope.go:117] "RemoveContainer" containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.587523 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.587791 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6ac3e04-b33d-46c2-8935-502b7c8d4bfc","Type":"ContainerDied","Data":"ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a"} Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.587855 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad3d99f4b7f6fa7c22c17938bf828ace3cc179b6210328514dd88609939f9c8a" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.601346 4820 scope.go:117] "RemoveContainer" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.602544 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.602803 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.603087 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc 
kubenswrapper[4820]: I0221 06:51:05.603460 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.603710 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.603977 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.616500 4820 scope.go:117] "RemoveContainer" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.633491 4820 scope.go:117] "RemoveContainer" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.644569 4820 scope.go:117] "RemoveContainer" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.660974 4820 scope.go:117] "RemoveContainer" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.680126 4820 scope.go:117] "RemoveContainer" 
containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.681697 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\": container with ID starting with 0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52 not found: ID does not exist" containerID="0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.681739 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52"} err="failed to get container status \"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\": rpc error: code = NotFound desc = could not find container \"0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52\": container with ID starting with 0eb3460b9a3ab4abb83f4bff87cee12d00281b5bf4c0f09156b23d554a8aaf52 not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.681770 4820 scope.go:117] "RemoveContainer" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.682150 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\": container with ID starting with 20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c not found: ID does not exist" containerID="20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682188 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c"} err="failed to get container status \"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\": rpc error: code = NotFound desc = could not find container \"20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c\": container with ID starting with 20c503277c0ffb03919078429037ca9c02d108151d793c8d6bde2b65664ab43c not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682209 4820 scope.go:117] "RemoveContainer" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.682552 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\": container with ID starting with 29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c not found: ID does not exist" containerID="29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682594 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c"} err="failed to get container status \"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\": rpc error: code = NotFound desc = could not find container \"29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c\": container with ID starting with 29610cb4783c03b805e1209cc70ba659227585a656e621f9ef02b44d9c46951c not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682623 4820 scope.go:117] "RemoveContainer" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.682898 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\": container with ID starting with eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81 not found: ID does not exist" containerID="eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682921 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81"} err="failed to get container status \"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\": rpc error: code = NotFound desc = could not find container \"eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81\": container with ID starting with eebd35cdb3b9c465e3ba92216c9faf081ab586409ac16969ce94c028ba434a81 not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.682936 4820 scope.go:117] "RemoveContainer" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.683147 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\": container with ID starting with ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb not found: ID does not exist" containerID="ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.683170 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb"} err="failed to get container status \"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\": rpc error: code = NotFound desc = could not find container 
\"ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb\": container with ID starting with ceefcc23979fc1f2f4252f470f8d51940d0ef32d55e8588a07268ccca4a921cb not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.683185 4820 scope.go:117] "RemoveContainer" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" Feb 21 06:51:05 crc kubenswrapper[4820]: E0221 06:51:05.683409 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\": container with ID starting with 4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff not found: ID does not exist" containerID="4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.683435 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff"} err="failed to get container status \"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\": rpc error: code = NotFound desc = could not find container \"4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff\": container with ID starting with 4e91babb23bc88fd14ff20561497302d5ee770bd73b58ce08707c4e740e3efff not found: ID does not exist" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.698433 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.698926 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.699116 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:05 crc kubenswrapper[4820]: I0221 06:51:05.707121 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.443633 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.444474 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.445364 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.445714 4820 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.447066 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:06 crc kubenswrapper[4820]: I0221 06:51:06.447816 4820 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.449850 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 21 06:51:06 crc kubenswrapper[4820]: E0221 06:51:06.650274 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 21 06:51:07 crc kubenswrapper[4820]: E0221 06:51:07.051065 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 21 06:51:07 crc kubenswrapper[4820]: E0221 06:51:07.852173 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 21 
06:51:09 crc kubenswrapper[4820]: E0221 06:51:09.452954 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Feb 21 06:51:12 crc kubenswrapper[4820]: E0221 06:51:12.654054 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="6.4s" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.696674 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.698414 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.700150 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.726291 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.726333 4820 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:14 crc kubenswrapper[4820]: E0221 06:51:14.726791 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:14 crc kubenswrapper[4820]: I0221 06:51:14.727358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:14 crc kubenswrapper[4820]: E0221 06:51:14.794404 4820 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" volumeName="registry-storage" Feb 21 06:51:15 crc kubenswrapper[4820]: E0221 06:51:15.336345 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896305944a78753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,LastTimestamp:2026-02-21 06:51:03.378921299 +0000 UTC m=+238.412005497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.592599 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.592652 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640450 4820 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="735fee1979cdeff66130e88134841588ea4b7ebd53bc0aef95ad9dc4bfefaa0d" exitCode=0 Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640522 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"735fee1979cdeff66130e88134841588ea4b7ebd53bc0aef95ad9dc4bfefaa0d"} Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86089d6d48f38f72b42419f0bbfcc84842000fc9b9ba12de2e9e5e7a692525c6"} Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640803 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.640815 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.641184 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: E0221 06:51:15.641202 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.641416 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.643361 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.643433 4820 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551" exitCode=1 Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.643474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551"} Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.644049 4820 scope.go:117] "RemoveContainer" containerID="f374d80b02828a8a35b4216caaaa0088634fbe33a32751576f222f65742dd551" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.644511 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.644884 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.645476 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.710778 4820 status_manager.go:851] "Failed to get status for pod" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.711107 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.711608 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:15 crc kubenswrapper[4820]: I0221 06:51:15.712018 4820 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.655713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e221a34bf2d85decc5e599515c1b73d92baf191aa8663e0a9c9c1399c5e20a23"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.656286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a56edaf54afc18e2138d5ebe923db837210206e361e79cd6f6bfd6560d99a8d"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.656371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b99e59f857c162668545dcc89db7d87f03f217ecd234e28cc61ba48e2b884d4"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.656483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b1ff58c79d6f3757a338637ab24928f6a0dc80677aad8d82b75fb74fe819142f"} Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.659280 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 21 06:51:16 crc kubenswrapper[4820]: I0221 06:51:16.659405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"660f4b8a1ef45b3186a7b148aa1774d6e3898d55c8964df158d023b88bb35ea5"} Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.592329 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.596520 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.668532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35789204aeb3e66dd268372ed837296102ab2ba444ce13c8983ad2986d638b98"} Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.668855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.668989 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:17 crc kubenswrapper[4820]: I0221 06:51:17.669016 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:19 crc kubenswrapper[4820]: I0221 06:51:19.727874 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:19 crc kubenswrapper[4820]: I0221 06:51:19.728185 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:19 crc kubenswrapper[4820]: I0221 06:51:19.733009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:22 crc kubenswrapper[4820]: I0221 06:51:22.695549 4820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.699394 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:23 crc 
kubenswrapper[4820]: I0221 06:51:23.700087 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.705583 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.706005 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:23 crc kubenswrapper[4820]: I0221 06:51:23.711889 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebb6d7cc-a7f1-4d6c-9fdf-debc48af3b5c" Feb 21 06:51:24 crc kubenswrapper[4820]: I0221 06:51:24.703106 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:24 crc kubenswrapper[4820]: I0221 06:51:24.703134 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:25 crc kubenswrapper[4820]: I0221 06:51:25.706615 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:25 crc kubenswrapper[4820]: I0221 06:51:25.706900 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5bd5c3ae-202a-4133-9af8-c4f2e51eea00" Feb 21 06:51:25 crc kubenswrapper[4820]: I0221 06:51:25.711925 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebb6d7cc-a7f1-4d6c-9fdf-debc48af3b5c" Feb 21 06:51:28 crc 
kubenswrapper[4820]: I0221 06:51:28.200591 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 21 06:51:32 crc kubenswrapper[4820]: I0221 06:51:32.009885 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 21 06:51:32 crc kubenswrapper[4820]: I0221 06:51:32.442761 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 21 06:51:33 crc kubenswrapper[4820]: I0221 06:51:33.289827 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 21 06:51:33 crc kubenswrapper[4820]: I0221 06:51:33.313373 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 21 06:51:34 crc kubenswrapper[4820]: I0221 06:51:34.278529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 21 06:51:34 crc kubenswrapper[4820]: I0221 06:51:34.414014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 21 06:51:34 crc kubenswrapper[4820]: I0221 06:51:34.896264 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.067377 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.110510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 
06:51:35.194694 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.459749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.854942 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 21 06:51:35 crc kubenswrapper[4820]: I0221 06:51:35.886668 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.143484 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.464888 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.540537 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.547689 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.551562 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.578622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.593090 4820 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.652326 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.754080 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.775717 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.862785 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.865301 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.951777 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.952401 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 21 06:51:36 crc kubenswrapper[4820]: I0221 06:51:36.989767 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.067594 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.093968 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.297638 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.402116 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.442141 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.529448 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.592371 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.624992 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.762922 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.811948 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.833319 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.839916 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 21 06:51:37 crc kubenswrapper[4820]: I0221 06:51:37.933258 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.154536 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.231820 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.258786 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.308796 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.331182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.339258 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.392082 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.466990 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.555995 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.573281 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.610099 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.722181 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.987636 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 21 06:51:38 crc kubenswrapper[4820]: I0221 06:51:38.990813 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.000993 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.019716 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.029602 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.068249 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.078265 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.095022 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.206889 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.264737 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.292982 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.348455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.435897 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.459547 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.480812 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.481140 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.537295 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.583080 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.584341 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.587079 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.621499 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.659481 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.682281 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.695578 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.698096 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.853022 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.928470 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 21 06:51:39 crc kubenswrapper[4820]: I0221 06:51:39.939872 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.004083 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.013020 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.015402 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.032930 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.075818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.093624 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.109174 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.152320 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.196534 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.309136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.322043 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.323621 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.368524 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.381908 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.474015 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.480424 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.562452 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.570029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.593582 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.862426 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 21 06:51:40 crc kubenswrapper[4820]: I0221 06:51:40.930222 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.045356 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.118578 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.125723 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.161645 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.164343 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.194146 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.216494 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.232012 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.232259 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.245744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.258879 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.273135 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.292544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.390894 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.402773 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.528386 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.575649 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.596473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.618709 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.639327 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.772408 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.808678 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.837261 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.886882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.908937 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 21 06:51:41 crc kubenswrapper[4820]: I0221 06:51:41.922231 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.048643 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.090491 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.150739 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.171331 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.496611 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.601564 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.649013 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.738253 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.827931 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.899438 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.899492 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 21 06:51:42 crc kubenswrapper[4820]: I0221 06:51:42.961266 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.081216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.119473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.501098 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.567139 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.601209 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.619590 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.651845 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.732814 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 21 06:51:43 crc kubenswrapper[4820]: I0221 06:51:43.948544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.003307 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.052973 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.134112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.144529 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.169459 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.191510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.212233 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.236925 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.250935 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.290067 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.414512 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.450426 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.542837 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.756157 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.898132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.965124 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 21 06:51:44 crc kubenswrapper[4820]: I0221 06:51:44.971547 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.001775 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.048828 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.080348 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.184597 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.199818 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.214132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.304311 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.359375 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.366107 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.407066 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.453838 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.597078 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.598369 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.616729 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.634304 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.928826 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.932763 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 21 06:51:45 crc kubenswrapper[4820]: I0221 06:51:45.939888 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.015445 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.095331 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.196073 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.240619 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.342934 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.344319 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.355381 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.386611 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.461630 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.524598 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.586347 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.604599 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.695803 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.700935 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.749556 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.856906 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 21 06:51:46 crc kubenswrapper[4820]: I0221 06:51:46.957854 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.139340 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.191765 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.317766 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.324724 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.330880 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.363990 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.367384 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.536791 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.565568 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.670321 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.775498 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.849880 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.887680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.897995 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 21 06:51:47 crc kubenswrapper[4820]: I0221 06:51:47.954428 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.179631 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.243317 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.284942 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.308661 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.336179 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.433900 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.496969 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.634426 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.680102 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.786902 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.845927 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.900964 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.965371 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.986295 4820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.989997 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.989969981 podStartE2EDuration="45.989969981s" podCreationTimestamp="2026-02-21 06:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:51:22.399957721 +0000 UTC m=+257.433041949" watchObservedRunningTime="2026-02-21 06:51:48.989969981 +0000 UTC m=+284.023054229"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993169 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993239 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/marketplace-operator-79b997595-wq5r9"]
Feb 21 06:51:48 crc kubenswrapper[4820]: E0221 06:51:48.993554 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerName="installer"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerName="installer"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.993760 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac3e04-b33d-46c2-8935-502b7c8d4bfc" containerName="installer"
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw","openshift-marketplace/community-operators-gt7zt","openshift-marketplace/redhat-marketplace-wfq7z","openshift-marketplace/redhat-operators-zcn45","openshift-marketplace/marketplace-operator-79b997595-k58x6"]
Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994385 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994646 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dtbbw" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server" containerID="cri-o://da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994764 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gt7zt" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server" containerID="cri-o://88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.994890 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfq7z" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server" containerID="cri-o://10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.995042 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcn45" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server" containerID="cri-o://5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" gracePeriod=30 Feb 21 06:51:48 crc kubenswrapper[4820]: I0221 06:51:48.995130 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator" containerID="cri-o://d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" gracePeriod=30 Feb 21 06:51:49 crc 
kubenswrapper[4820]: I0221 06:51:49.011182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.016328 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.061819 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.061803249 podStartE2EDuration="27.061803249s" podCreationTimestamp="2026-02-21 06:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:51:49.03387436 +0000 UTC m=+284.066958558" watchObservedRunningTime="2026-02-21 06:51:49.061803249 +0000 UTC m=+284.094887437" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.075296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6h8r\" (UniqueName: \"kubernetes.io/projected/37683f41-a9aa-4abd-809d-25df5114e93a-kube-api-access-k6h8r\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.075361 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.075391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.146005 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.176631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6h8r\" (UniqueName: \"kubernetes.io/projected/37683f41-a9aa-4abd-809d-25df5114e93a-kube-api-access-k6h8r\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.176704 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.176737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.178489 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.184323 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37683f41-a9aa-4abd-809d-25df5114e93a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.202412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6h8r\" (UniqueName: \"kubernetes.io/projected/37683f41-a9aa-4abd-809d-25df5114e93a-kube-api-access-k6h8r\") pod \"marketplace-operator-79b997595-wq5r9\" (UID: \"37683f41-a9aa-4abd-809d-25df5114e93a\") " pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.205445 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.313339 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.313583 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.403224 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.412127 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.418411 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.427622 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.429462 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.430376 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.439586 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.472781 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snxb4\" (UniqueName: \"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") pod \"04595c48-2a70-4760-8e24-5266735b9e82\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489566 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") pod \"88718c88-6c0d-4eb1-af7e-14353e291e27\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") pod \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") pod \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") pod 
\"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") pod \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\" (UID: \"9c9aa300-090c-44cb-91ed-1c1bdc44cbae\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") pod \"04595c48-2a70-4760-8e24-5266735b9e82\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rknh8\" (UniqueName: \"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") pod \"88718c88-6c0d-4eb1-af7e-14353e291e27\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489767 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") pod \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489849 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") pod \"73ed3342-c0c6-46e6-a021-e3c6578829f6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") pod \"73ed3342-c0c6-46e6-a021-e3c6578829f6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") pod \"73ed3342-c0c6-46e6-a021-e3c6578829f6\" (UID: \"73ed3342-c0c6-46e6-a021-e3c6578829f6\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") pod \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\" (UID: \"62bc411a-7f2e-4a7c-8a27-d758d4716f0e\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489922 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") pod \"04595c48-2a70-4760-8e24-5266735b9e82\" (UID: \"04595c48-2a70-4760-8e24-5266735b9e82\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.489942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") pod \"88718c88-6c0d-4eb1-af7e-14353e291e27\" (UID: \"88718c88-6c0d-4eb1-af7e-14353e291e27\") " Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.490711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities" (OuterVolumeSpecName: "utilities") pod "88718c88-6c0d-4eb1-af7e-14353e291e27" (UID: 
"88718c88-6c0d-4eb1-af7e-14353e291e27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.491248 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities" (OuterVolumeSpecName: "utilities") pod "9c9aa300-090c-44cb-91ed-1c1bdc44cbae" (UID: "9c9aa300-090c-44cb-91ed-1c1bdc44cbae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.491809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4" (OuterVolumeSpecName: "kube-api-access-snxb4") pod "04595c48-2a70-4760-8e24-5266735b9e82" (UID: "04595c48-2a70-4760-8e24-5266735b9e82"). InnerVolumeSpecName "kube-api-access-snxb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.492221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities" (OuterVolumeSpecName: "utilities") pod "62bc411a-7f2e-4a7c-8a27-d758d4716f0e" (UID: "62bc411a-7f2e-4a7c-8a27-d758d4716f0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.492486 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities" (OuterVolumeSpecName: "utilities") pod "04595c48-2a70-4760-8e24-5266735b9e82" (UID: "04595c48-2a70-4760-8e24-5266735b9e82"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.493181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "73ed3342-c0c6-46e6-a021-e3c6578829f6" (UID: "73ed3342-c0c6-46e6-a021-e3c6578829f6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.493468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8" (OuterVolumeSpecName: "kube-api-access-rknh8") pod "88718c88-6c0d-4eb1-af7e-14353e291e27" (UID: "88718c88-6c0d-4eb1-af7e-14353e291e27"). InnerVolumeSpecName "kube-api-access-rknh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.494801 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c" (OuterVolumeSpecName: "kube-api-access-d498c") pod "9c9aa300-090c-44cb-91ed-1c1bdc44cbae" (UID: "9c9aa300-090c-44cb-91ed-1c1bdc44cbae"). InnerVolumeSpecName "kube-api-access-d498c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.495285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw" (OuterVolumeSpecName: "kube-api-access-b7jnw") pod "62bc411a-7f2e-4a7c-8a27-d758d4716f0e" (UID: "62bc411a-7f2e-4a7c-8a27-d758d4716f0e"). InnerVolumeSpecName "kube-api-access-b7jnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.495828 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb" (OuterVolumeSpecName: "kube-api-access-c4wvb") pod "73ed3342-c0c6-46e6-a021-e3c6578829f6" (UID: "73ed3342-c0c6-46e6-a021-e3c6578829f6"). InnerVolumeSpecName "kube-api-access-c4wvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.503758 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "73ed3342-c0c6-46e6-a021-e3c6578829f6" (UID: "73ed3342-c0c6-46e6-a021-e3c6578829f6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.522584 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62bc411a-7f2e-4a7c-8a27-d758d4716f0e" (UID: "62bc411a-7f2e-4a7c-8a27-d758d4716f0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.544065 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.550041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c9aa300-090c-44cb-91ed-1c1bdc44cbae" (UID: "9c9aa300-090c-44cb-91ed-1c1bdc44cbae"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.560043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88718c88-6c0d-4eb1-af7e-14353e291e27" (UID: "88718c88-6c0d-4eb1-af7e-14353e291e27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.571644 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590356 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7jnw\" (UniqueName: \"kubernetes.io/projected/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-kube-api-access-b7jnw\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590383 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590391 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590400 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590408 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rknh8\" (UniqueName: 
\"kubernetes.io/projected/88718c88-6c0d-4eb1-af7e-14353e291e27-kube-api-access-rknh8\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590417 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wvb\" (UniqueName: \"kubernetes.io/projected/73ed3342-c0c6-46e6-a021-e3c6578829f6-kube-api-access-c4wvb\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590436 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590444 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ed3342-c0c6-46e6-a021-e3c6578829f6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590451 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bc411a-7f2e-4a7c-8a27-d758d4716f0e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590459 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590467 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snxb4\" (UniqueName: 
\"kubernetes.io/projected/04595c48-2a70-4760-8e24-5266735b9e82-kube-api-access-snxb4\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590476 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88718c88-6c0d-4eb1-af7e-14353e291e27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.590484 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d498c\" (UniqueName: \"kubernetes.io/projected/9c9aa300-090c-44cb-91ed-1c1bdc44cbae-kube-api-access-d498c\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.598215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.622175 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04595c48-2a70-4760-8e24-5266735b9e82" (UID: "04595c48-2a70-4760-8e24-5266735b9e82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.686730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.691043 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04595c48-2a70-4760-8e24-5266735b9e82-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.722983 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wq5r9"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.731844 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.825285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" event={"ID":"37683f41-a9aa-4abd-809d-25df5114e93a","Type":"ContainerStarted","Data":"ae5150e53962cd919ec6950a7adfca79ac114b6fa78479c0e6d8bb6e7c605f4b"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827265 4820 generic.go:334] "Generic (PLEG): container finished" podID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827337 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dtbbw" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827356 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dtbbw" event={"ID":"9c9aa300-090c-44cb-91ed-1c1bdc44cbae","Type":"ContainerDied","Data":"1186b29bef767e21ec1c625c6cc6253779166154a0a774141ac1f83ba9af24e6"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.827804 4820 scope.go:117] "RemoveContainer" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829862 4820 generic.go:334] "Generic (PLEG): container finished" podID="04595c48-2a70-4760-8e24-5266735b9e82" containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcn45" event={"ID":"04595c48-2a70-4760-8e24-5266735b9e82","Type":"ContainerDied","Data":"85b548c074e9ee1f2673409e289d81bf0908133ef92294aa7291d120aa6cc445"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.829931 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcn45" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833212 4820 generic.go:334] "Generic (PLEG): container finished" podID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833281 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfq7z" event={"ID":"62bc411a-7f2e-4a7c-8a27-d758d4716f0e","Type":"ContainerDied","Data":"0313a503380cf7228ea4e19fb74b8d644a2d0e9f2e03718d0432d7e8be1cd955"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.833357 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfq7z" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837870 4820 generic.go:334] "Generic (PLEG): container finished" podID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerDied","Data":"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" event={"ID":"73ed3342-c0c6-46e6-a021-e3c6578829f6","Type":"ContainerDied","Data":"c67db1d6ea1ea9f42d159552b399ae3814a8a2a153770e3fc34b2a49bbb171e0"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.837974 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k58x6" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841126 4820 generic.go:334] "Generic (PLEG): container finished" podID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" exitCode=0 Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841320 4820 scope.go:117] "RemoveContainer" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841465 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gt7zt" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.841733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gt7zt" event={"ID":"88718c88-6c0d-4eb1-af7e-14353e291e27","Type":"ContainerDied","Data":"04fd41dbab4d8a603151ac33844cbba8ff658b873d854bbf23d1ef0e3e50dc39"} Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.849884 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.854253 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dtbbw"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.865716 4820 scope.go:117] "RemoveContainer" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.866752 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.871965 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfq7z"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.876688 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.882400 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcn45"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.886445 4820 
scope.go:117] "RemoveContainer" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.886881 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9\": container with ID starting with da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9 not found: ID does not exist" containerID="da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.886923 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9"} err="failed to get container status \"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9\": rpc error: code = NotFound desc = could not find container \"da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9\": container with ID starting with da0c9693cf43836172e499d92d26e7ddd9205a7938632f1671119639f5365ff9 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.886956 4820 scope.go:117] "RemoveContainer" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887020 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.887263 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72\": container with ID starting with b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72 not found: ID does not exist" containerID="b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72" 
Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887294 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72"} err="failed to get container status \"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72\": rpc error: code = NotFound desc = could not find container \"b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72\": container with ID starting with b59354749c69abe6b87355cf2c50eb9f1ea3238cf6b73afa7ef065b9a486cc72 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887313 4820 scope.go:117] "RemoveContainer" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.887585 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38\": container with ID starting with 7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38 not found: ID does not exist" containerID="7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887618 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38"} err="failed to get container status \"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38\": rpc error: code = NotFound desc = could not find container \"7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38\": container with ID starting with 7837d7e0c8923a061d166b3396d700b8042ab7a51d1d0b1e794f39bba05c3b38 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.887646 4820 scope.go:117] "RemoveContainer" 
containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.890979 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k58x6"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.898448 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.901237 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gt7zt"] Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.901376 4820 scope.go:117] "RemoveContainer" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.914599 4820 scope.go:117] "RemoveContainer" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.924860 4820 scope.go:117] "RemoveContainer" containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.925291 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1\": container with ID starting with 5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1 not found: ID does not exist" containerID="5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925323 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1"} err="failed to get container status \"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1\": rpc error: code = NotFound desc = could not find 
container \"5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1\": container with ID starting with 5ecf8abced6124ff5815e95d8e927124a6dd4218b43ed6bbabe7689c22a156d1 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925354 4820 scope.go:117] "RemoveContainer" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.925719 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8\": container with ID starting with dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8 not found: ID does not exist" containerID="dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925742 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8"} err="failed to get container status \"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8\": rpc error: code = NotFound desc = could not find container \"dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8\": container with ID starting with dca8029b5ccbf3abb0e8bc57042cf5193e84bba5efa4b0c3f6ae72841f88dfb8 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.925758 4820 scope.go:117] "RemoveContainer" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.926032 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597\": container with ID starting with 136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597 not found: ID does 
not exist" containerID="136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.926058 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597"} err="failed to get container status \"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597\": rpc error: code = NotFound desc = could not find container \"136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597\": container with ID starting with 136c19d7e530a53b30ebe0cf2125d06220c0617d5837092339f5c5f27fa5e597 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.926076 4820 scope.go:117] "RemoveContainer" containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.941415 4820 scope.go:117] "RemoveContainer" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.950668 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.961610 4820 scope.go:117] "RemoveContainer" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.982145 4820 scope.go:117] "RemoveContainer" containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.982710 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e\": container with ID starting with 10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e not found: ID does not exist" 
containerID="10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.982769 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e"} err="failed to get container status \"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e\": rpc error: code = NotFound desc = could not find container \"10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e\": container with ID starting with 10fd66708ebbefea016086c32a77c1a5211bd6d0948d72749a1ed2bacbe9b99e not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.982797 4820 scope.go:117] "RemoveContainer" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.983144 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac\": container with ID starting with a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac not found: ID does not exist" containerID="a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.983230 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac"} err="failed to get container status \"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac\": rpc error: code = NotFound desc = could not find container \"a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac\": container with ID starting with a4f94c57678944e7b84a2985baf959347b84ef256b70d75dfcef8ab22531b5ac not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.983362 4820 scope.go:117] 
"RemoveContainer" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" Feb 21 06:51:49 crc kubenswrapper[4820]: E0221 06:51:49.984204 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03\": container with ID starting with ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03 not found: ID does not exist" containerID="ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.984227 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03"} err="failed to get container status \"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03\": rpc error: code = NotFound desc = could not find container \"ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03\": container with ID starting with ef5d00cf55ecb6ccbb756c8326e16979536e24c65a6997f1a49b6e54c9b18d03 not found: ID does not exist" Feb 21 06:51:49 crc kubenswrapper[4820]: I0221 06:51:49.984242 4820 scope.go:117] "RemoveContainer" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:49.999486 4820 scope.go:117] "RemoveContainer" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 06:51:49.999757 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855\": container with ID starting with d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855 not found: ID does not exist" containerID="d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855" Feb 21 06:51:50 crc 
kubenswrapper[4820]: I0221 06:51:49.999774 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855"} err="failed to get container status \"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855\": rpc error: code = NotFound desc = could not find container \"d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855\": container with ID starting with d80640b2773658c6a86c085b54692e7c9afad2fb9b673cd15fba0a08b93ad855 not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:49.999791 4820 scope.go:117] "RemoveContainer" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.014458 4820 scope.go:117] "RemoveContainer" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.030537 4820 scope.go:117] "RemoveContainer" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.032007 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.043754 4820 scope.go:117] "RemoveContainer" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 06:51:50.044141 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8\": container with ID starting with 88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8 not found: ID does not exist" containerID="88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044244 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8"} err="failed to get container status \"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8\": rpc error: code = NotFound desc = could not find container \"88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8\": container with ID starting with 88ff9419da2b03a717ff45987cace5318ff803968a7f85a18fdbd899e047b4a8 not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044339 4820 scope.go:117] "RemoveContainer" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 06:51:50.044769 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79\": container with ID starting with ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79 not found: ID does not exist" containerID="ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044795 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79"} err="failed to get container status \"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79\": rpc error: code = NotFound desc = could not find container \"ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79\": container with ID starting with ef8e0457fa8a84bb9bee81f3c950b2df5318b05d630e936f7b3f27acb8e24a79 not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.044814 4820 scope.go:117] "RemoveContainer" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" Feb 21 06:51:50 crc kubenswrapper[4820]: E0221 
06:51:50.045471 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd\": container with ID starting with 74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd not found: ID does not exist" containerID="74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.045596 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd"} err="failed to get container status \"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd\": rpc error: code = NotFound desc = could not find container \"74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd\": container with ID starting with 74423505c89df74d8b59f346b338557646ec65a610a5b1e89b6678803cd7f3bd not found: ID does not exist" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.448126 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.598683 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.754687 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.852936 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" event={"ID":"37683f41-a9aa-4abd-809d-25df5114e93a","Type":"ContainerStarted","Data":"cea7128fa56505882f5ed35821fa8f478e31a70c1b3af21a7c999c22c108f559"} Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.853225 4820 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.856591 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.870930 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wq5r9" podStartSLOduration=12.870911591 podStartE2EDuration="12.870911591s" podCreationTimestamp="2026-02-21 06:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:51:50.866998234 +0000 UTC m=+285.900082432" watchObservedRunningTime="2026-02-21 06:51:50.870911591 +0000 UTC m=+285.903995789" Feb 21 06:51:50 crc kubenswrapper[4820]: I0221 06:51:50.992882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.149331 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.699903 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.703154 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04595c48-2a70-4760-8e24-5266735b9e82" path="/var/lib/kubelet/pods/04595c48-2a70-4760-8e24-5266735b9e82/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.703972 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" path="/var/lib/kubelet/pods/62bc411a-7f2e-4a7c-8a27-d758d4716f0e/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.704733 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" path="/var/lib/kubelet/pods/73ed3342-c0c6-46e6-a021-e3c6578829f6/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.705815 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" path="/var/lib/kubelet/pods/88718c88-6c0d-4eb1-af7e-14353e291e27/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.706594 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" path="/var/lib/kubelet/pods/9c9aa300-090c-44cb-91ed-1c1bdc44cbae/volumes" Feb 21 06:51:51 crc kubenswrapper[4820]: I0221 06:51:51.774559 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 21 06:51:52 crc kubenswrapper[4820]: I0221 06:51:52.083490 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 21 06:51:56 crc kubenswrapper[4820]: I0221 06:51:56.387575 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 21 06:51:56 crc kubenswrapper[4820]: I0221 06:51:56.388333 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c" gracePeriod=5 Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.908296 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.908710 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c" exitCode=137
Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.951004 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 21 06:52:01 crc kubenswrapper[4820]: I0221 06:52:01.951292 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138538 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138563 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.138958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139136 4820 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139148 4820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139157 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.139166 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.145343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.239929 4820 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.915683 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.915751 4820 scope.go:117] "RemoveContainer" containerID="07d46c1f5856a8c7fb05f92172496c8e5e14d734b24a90cd2abe32a41c2d224c"
Feb 21 06:52:02 crc kubenswrapper[4820]: I0221 06:52:02.915842 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.702675 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.703691 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.713753 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.713794 4820 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="58731c15-9a5c-48a8-b456-56e4448dae4f"
Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.717848 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 21 06:52:03 crc kubenswrapper[4820]: I0221 06:52:03.717981 4820 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="58731c15-9a5c-48a8-b456-56e4448dae4f"
Feb 21 06:52:05 crc kubenswrapper[4820]: I0221 06:52:05.469872 4820 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.408516 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"]
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.408716 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager" containerID="cri-o://a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" gracePeriod=30
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.509808 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"]
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.510007 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager" containerID="cri-o://d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" gracePeriod=30
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.808915 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.869540 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938144 4820 generic.go:334] "Generic (PLEG): container finished" podID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053" exitCode=0
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938183 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerDied","Data":"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"}
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhsbz" event={"ID":"bec4e07b-2745-4a45-8717-3ee01f99919e","Type":"ContainerDied","Data":"4d78f1e45a0c6a4cb8ba55254cd92ac8d35c6e02d5bd767c1be192646a5e40fd"}
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.938280 4820 scope.go:117] "RemoveContainer" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939685 4820 generic.go:334] "Generic (PLEG): container finished" podID="a584a459-0672-47ef-bb32-c79f31790f91" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358" exitCode=0
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerDied","Data":"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"}
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x" event={"ID":"a584a459-0672-47ef-bb32-c79f31790f91","Type":"ContainerDied","Data":"9f4896a106314bc994acfd7faee81b0d6630a37fbb60ec630db8d04e58c2928f"}
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.939786 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.953454 4820 scope.go:117] "RemoveContainer" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"
Feb 21 06:52:06 crc kubenswrapper[4820]: E0221 06:52:06.953837 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053\": container with ID starting with a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053 not found: ID does not exist" containerID="a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.953869 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053"} err="failed to get container status \"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053\": rpc error: code = NotFound desc = could not find container \"a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053\": container with ID starting with a77159ba8f797106c177181210f22dfea46ce4673d4c139331451e5ce3784053 not found: ID does not exist"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.953890 4820 scope.go:117] "RemoveContainer" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.965215 4820 scope.go:117] "RemoveContainer" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"
Feb 21 06:52:06 crc kubenswrapper[4820]: E0221 06:52:06.965573 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358\": container with ID starting with d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358 not found: ID does not exist" containerID="d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.965597 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358"} err="failed to get container status \"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358\": rpc error: code = NotFound desc = could not find container \"d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358\": container with ID starting with d2738ac45e957ed10d941192d14e0c1e115481338254c9f84f5fe963f5be8358 not found: ID does not exist"
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994159 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994226 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994286 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994329 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") pod \"a584a459-0672-47ef-bb32-c79f31790f91\" (UID: \"a584a459-0672-47ef-bb32-c79f31790f91\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.994454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") pod \"bec4e07b-2745-4a45-8717-3ee01f99919e\" (UID: \"bec4e07b-2745-4a45-8717-3ee01f99919e\") "
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca" (OuterVolumeSpecName: "client-ca") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995251 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config" (OuterVolumeSpecName: "config") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config" (OuterVolumeSpecName: "config") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.995725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999499 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr" (OuterVolumeSpecName: "kube-api-access-qxxhr") pod "bec4e07b-2745-4a45-8717-3ee01f99919e" (UID: "bec4e07b-2745-4a45-8717-3ee01f99919e"). InnerVolumeSpecName "kube-api-access-qxxhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:52:06 crc kubenswrapper[4820]: I0221 06:52:06.999682 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx" (OuterVolumeSpecName: "kube-api-access-84hkx") pod "a584a459-0672-47ef-bb32-c79f31790f91" (UID: "a584a459-0672-47ef-bb32-c79f31790f91"). InnerVolumeSpecName "kube-api-access-84hkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095771 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a584a459-0672-47ef-bb32-c79f31790f91-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095810 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095819 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-config\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095827 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-client-ca\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095836 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a584a459-0672-47ef-bb32-c79f31790f91-config\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095845 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84hkx\" (UniqueName: \"kubernetes.io/projected/a584a459-0672-47ef-bb32-c79f31790f91-kube-api-access-84hkx\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095855 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxxhr\" (UniqueName: \"kubernetes.io/projected/bec4e07b-2745-4a45-8717-3ee01f99919e-kube-api-access-qxxhr\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095863 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bec4e07b-2745-4a45-8717-3ee01f99919e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.095872 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec4e07b-2745-4a45-8717-3ee01f99919e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.275170 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.281447 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4vn9x"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.284455 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.287013 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhsbz"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.584794 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"]
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585080 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585091 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585098 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585111 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585118 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585131 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585146 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585152 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585163 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585171 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585184 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585191 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585203 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585210 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585226 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585256 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585265 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-utilities"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585276 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585298 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585308 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585323 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585330 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585338 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585345 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="extract-content"
Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.585352 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585362 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9aa300-090c-44cb-91ed-1c1bdc44cbae" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585477 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ed3342-c0c6-46e6-a021-e3c6578829f6" containerName="marketplace-operator"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585489 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bc411a-7f2e-4a7c-8a27-d758d4716f0e" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585498 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585508 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="88718c88-6c0d-4eb1-af7e-14353e291e27" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585517 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="04595c48-2a70-4760-8e24-5266735b9e82" containerName="registry-server"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585524 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" containerName="controller-manager"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585531 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a584a459-0672-47ef-bb32-c79f31790f91" containerName="route-controller-manager"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.585992 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588177 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588200 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588663 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588788 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.588923 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.589021 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.589193 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.590978 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.590980 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.591087 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.591469 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.591529 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.592182 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.596733 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.598018 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.604401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"]
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.702087 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a584a459-0672-47ef-bb32-c79f31790f91" path="/var/lib/kubelet/pods/a584a459-0672-47ef-bb32-c79f31790f91/volumes"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.702612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec4e07b-2745-4a45-8717-3ee01f99919e" path="/var/lib/kubelet/pods/bec4e07b-2745-4a45-8717-3ee01f99919e/volumes"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.703889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.703950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.703997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704056 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn"
Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod
\"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.704172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.805987 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806075 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806133 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806152 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.806191 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.808497 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.808833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.808948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod 
\"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.809037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.809201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.817219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.817403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.825125 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:07 crc kubenswrapper[4820]: E0221 06:52:07.825717 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-6vcbz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" podUID="d9a882b7-b656-49ef-8854-266b0c82f673" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.833965 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"route-controller-manager-5d57cfbc8-t8522\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.836713 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.837142 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.842108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"controller-manager-76464bf686-krxfn\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.951272 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:07 crc kubenswrapper[4820]: I0221 06:52:07.967044 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007560 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007592 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") pod \"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.007655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") pod 
\"d9a882b7-b656-49ef-8854-266b0c82f673\" (UID: \"d9a882b7-b656-49ef-8854-266b0c82f673\") " Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.008266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.008313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.008370 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config" (OuterVolumeSpecName: "config") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.010929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.011352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz" (OuterVolumeSpecName: "kube-api-access-6vcbz") pod "d9a882b7-b656-49ef-8854-266b0c82f673" (UID: "d9a882b7-b656-49ef-8854-266b0c82f673"). InnerVolumeSpecName "kube-api-access-6vcbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108229 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108581 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108592 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9a882b7-b656-49ef-8854-266b0c82f673-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108602 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vcbz\" (UniqueName: \"kubernetes.io/projected/d9a882b7-b656-49ef-8854-266b0c82f673-kube-api-access-6vcbz\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.108613 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a882b7-b656-49ef-8854-266b0c82f673-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.285927 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.956592 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76464bf686-krxfn" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.956593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerStarted","Data":"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1"} Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.957051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerStarted","Data":"6d3634c5d597a3497f486967727d92eb13c45b2cf751ecb5f1c210a3ddc19e37"} Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.956658 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" containerID="cri-o://136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" gracePeriod=30 Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.957095 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.964139 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:08 crc kubenswrapper[4820]: I0221 06:52:08.983041 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" podStartSLOduration=2.983021469 podStartE2EDuration="2.983021469s" podCreationTimestamp="2026-02-21 06:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:08.980041437 +0000 UTC m=+304.013125635" watchObservedRunningTime="2026-02-21 06:52:08.983021469 +0000 UTC m=+304.016105677" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.017616 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.018418 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.021687 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.025362 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76464bf686-krxfn"] Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.027749 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.028187 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.028368 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.028880 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 
21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.029310 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.029446 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.030525 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.039316 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119112 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " 
pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.119436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220182 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.220214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.221462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.221962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.222435 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.240623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.240926 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"controller-manager-846df49455-q45hw\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") " pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.318743 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.382678 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422451 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.422514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") pod \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\" (UID: \"41ae1fcb-c09f-4395-8e87-b5ae4206c608\") " Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.423763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca" (OuterVolumeSpecName: "client-ca") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.424692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config" (OuterVolumeSpecName: "config") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.426549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6" (OuterVolumeSpecName: "kube-api-access-fs6z6") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "kube-api-access-fs6z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.426626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41ae1fcb-c09f-4395-8e87-b5ae4206c608" (UID: "41ae1fcb-c09f-4395-8e87-b5ae4206c608"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524117 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ae1fcb-c09f-4395-8e87-b5ae4206c608-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524161 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs6z6\" (UniqueName: \"kubernetes.io/projected/41ae1fcb-c09f-4395-8e87-b5ae4206c608-kube-api-access-fs6z6\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524180 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.524193 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ae1fcb-c09f-4395-8e87-b5ae4206c608-client-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.703116 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a882b7-b656-49ef-8854-266b0c82f673" path="/var/lib/kubelet/pods/d9a882b7-b656-49ef-8854-266b0c82f673/volumes" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.776141 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"] Feb 21 06:52:09 crc kubenswrapper[4820]: W0221 06:52:09.785085 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1b83c5_e50f_463a_9392_497a22f7d844.slice/crio-3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6 WatchSource:0}: Error finding container 3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6: Status 404 returned error can't find 
the container with id 3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6 Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.963927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerStarted","Data":"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.964284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerStarted","Data":"3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.964304 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966001 4820 generic.go:334] "Generic (PLEG): container finished" podID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" exitCode=0 Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerDied","Data":"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" event={"ID":"41ae1fcb-c09f-4395-8e87-b5ae4206c608","Type":"ContainerDied","Data":"6d3634c5d597a3497f486967727d92eb13c45b2cf751ecb5f1c210a3ddc19e37"} Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966105 4820 
scope.go:117] "RemoveContainer" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966138 4820 patch_prober.go:28] interesting pod/controller-manager-846df49455-q45hw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966177 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.966224 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.981116 4820 scope.go:117] "RemoveContainer" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" Feb 21 06:52:09 crc kubenswrapper[4820]: E0221 06:52:09.981684 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1\": container with ID starting with 136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1 not found: ID does not exist" containerID="136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1" Feb 21 06:52:09 crc kubenswrapper[4820]: I0221 06:52:09.981741 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1"} err="failed to get 
container status \"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1\": rpc error: code = NotFound desc = could not find container \"136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1\": container with ID starting with 136c9ff0914b22fd16a95ebc546848605a3ee71ffa5071f1a94974798a085cf1 not found: ID does not exist" Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.001267 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" podStartSLOduration=3.001249095 podStartE2EDuration="3.001249095s" podCreationTimestamp="2026-02-21 06:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:09.984354149 +0000 UTC m=+305.017438337" watchObservedRunningTime="2026-02-21 06:52:10.001249095 +0000 UTC m=+305.034333313" Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.002335 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.005318 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d57cfbc8-t8522"] Feb 21 06:52:10 crc kubenswrapper[4820]: I0221 06:52:10.976944 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.588304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls"] Feb 21 06:52:11 crc kubenswrapper[4820]: E0221 06:52:11.588488 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" Feb 21 06:52:11 crc 
kubenswrapper[4820]: I0221 06:52:11.588499 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.588576 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" containerName="route-controller-manager" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.588879 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.591649 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.591715 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.591845 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.592014 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.597260 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.598891 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.616519 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls"] Feb 21 06:52:11 crc kubenswrapper[4820]: 
I0221 06:52:11.702159 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ae1fcb-c09f-4395-8e87-b5ae4206c608" path="/var/lib/kubelet/pods/41ae1fcb-c09f-4395-8e87-b5ae4206c608/volumes" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.746841 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d190e9-6eb1-4655-9158-5e563b1e8c67-serving-cert\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.747206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-client-ca\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.747272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-config\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.747538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbsd\" (UniqueName: \"kubernetes.io/projected/b8d190e9-6eb1-4655-9158-5e563b1e8c67-kube-api-access-jlbsd\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " 
pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.848919 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-client-ca\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.849022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-config\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.849050 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbsd\" (UniqueName: \"kubernetes.io/projected/b8d190e9-6eb1-4655-9158-5e563b1e8c67-kube-api-access-jlbsd\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.849089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d190e9-6eb1-4655-9158-5e563b1e8c67-serving-cert\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.850411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-client-ca\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.850553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8d190e9-6eb1-4655-9158-5e563b1e8c67-config\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.855415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d190e9-6eb1-4655-9158-5e563b1e8c67-serving-cert\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.870329 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbsd\" (UniqueName: \"kubernetes.io/projected/b8d190e9-6eb1-4655-9158-5e563b1e8c67-kube-api-access-jlbsd\") pod \"route-controller-manager-77d989666b-5s8ls\" (UID: \"b8d190e9-6eb1-4655-9158-5e563b1e8c67\") " pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:11 crc kubenswrapper[4820]: I0221 06:52:11.904102 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.302639 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls"] Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.984957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" event={"ID":"b8d190e9-6eb1-4655-9158-5e563b1e8c67","Type":"ContainerStarted","Data":"3cf04a51b2889b2ac1c3f0a671ba25776c4615dcf653a0e38f78b8ae1b0ab0df"} Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.985303 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.985377 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" event={"ID":"b8d190e9-6eb1-4655-9158-5e563b1e8c67","Type":"ContainerStarted","Data":"21451f0c238797ca5f0d7d8299f9651f74de2a7ba8e74d1bb34d667341c5c6c2"} Feb 21 06:52:12 crc kubenswrapper[4820]: I0221 06:52:12.993813 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" Feb 21 06:52:13 crc kubenswrapper[4820]: I0221 06:52:13.009326 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77d989666b-5s8ls" podStartSLOduration=6.009299899 podStartE2EDuration="6.009299899s" podCreationTimestamp="2026-02-21 06:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:13.006380198 +0000 UTC m=+308.039464406" 
watchObservedRunningTime="2026-02-21 06:52:13.009299899 +0000 UTC m=+308.042384097" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.521445 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78dnb"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.523425 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.525526 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.534995 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78dnb"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.663614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-utilities\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.663738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-catalog-content\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.663793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsbl\" (UniqueName: \"kubernetes.io/projected/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-kube-api-access-qzsbl\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " 
pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.715059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drqmx"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.716219 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.718225 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.725254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drqmx"] Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.764663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-catalog-content\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.764912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsbl\" (UniqueName: \"kubernetes.io/projected/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-kube-api-access-qzsbl\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.765036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-utilities\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: 
I0221 06:52:21.765165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-catalog-content\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.765417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-utilities\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.781970 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsbl\" (UniqueName: \"kubernetes.io/projected/ef1d43db-e76a-4d34-8528-4c549bcbc2e2-kube-api-access-qzsbl\") pod \"redhat-marketplace-78dnb\" (UID: \"ef1d43db-e76a-4d34-8528-4c549bcbc2e2\") " pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.839807 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.866210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf48h\" (UniqueName: \"kubernetes.io/projected/fa04064f-b88b-4b27-a882-1cbdae3d4485-kube-api-access-bf48h\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.866284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-utilities\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.866312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-catalog-content\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.967855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf48h\" (UniqueName: \"kubernetes.io/projected/fa04064f-b88b-4b27-a882-1cbdae3d4485-kube-api-access-bf48h\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.968194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-utilities\") pod \"redhat-operators-drqmx\" (UID: 
\"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.968216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-catalog-content\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.968936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-catalog-content\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.969316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa04064f-b88b-4b27-a882-1cbdae3d4485-utilities\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:21 crc kubenswrapper[4820]: I0221 06:52:21.991218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf48h\" (UniqueName: \"kubernetes.io/projected/fa04064f-b88b-4b27-a882-1cbdae3d4485-kube-api-access-bf48h\") pod \"redhat-operators-drqmx\" (UID: \"fa04064f-b88b-4b27-a882-1cbdae3d4485\") " pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:22 crc kubenswrapper[4820]: I0221 06:52:22.029604 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:22 crc kubenswrapper[4820]: I0221 06:52:22.296579 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78dnb"] Feb 21 06:52:22 crc kubenswrapper[4820]: I0221 06:52:22.399192 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drqmx"] Feb 21 06:52:22 crc kubenswrapper[4820]: W0221 06:52:22.405885 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa04064f_b88b_4b27_a882_1cbdae3d4485.slice/crio-b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a WatchSource:0}: Error finding container b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a: Status 404 returned error can't find the container with id b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.033751 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa04064f-b88b-4b27-a882-1cbdae3d4485" containerID="6a6e9bd95512a329ff09949fbe07a697ba7d08c16fc5af5f309fc9df573ee567" exitCode=0 Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.033794 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerDied","Data":"6a6e9bd95512a329ff09949fbe07a697ba7d08c16fc5af5f309fc9df573ee567"} Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.034043 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerStarted","Data":"b649f0f405fce0d6057d0daaabde59f64ff8ae564a240b0051bf307d47f0d23a"} Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.035676 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="ef1d43db-e76a-4d34-8528-4c549bcbc2e2" containerID="56d3fb72a651afcc482ce84462dcbdc39f11a6db791756eb6f34b267b246adeb" exitCode=0
Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.035737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerDied","Data":"56d3fb72a651afcc482ce84462dcbdc39f11a6db791756eb6f34b267b246adeb"}
Feb 21 06:52:23 crc kubenswrapper[4820]: I0221 06:52:23.035776 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerStarted","Data":"f2a933260e1873367ab71f82c0867fdf99cb661ff8470019f13d861119167edb"}
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.041825 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerStarted","Data":"68fc377fe32fabd2d7d2adaf405193300ff4510b85660287c0d549b1e9d70b8e"}
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.043546 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef1d43db-e76a-4d34-8528-4c549bcbc2e2" containerID="b7de4a0cd2fc1c3d14eccedb5dd85a9e8ac111167e09a2f35daea3b88303c058" exitCode=0
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.043581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerDied","Data":"b7de4a0cd2fc1c3d14eccedb5dd85a9e8ac111167e09a2f35daea3b88303c058"}
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.316713 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9t7gg"]
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.318388 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.321569 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.324958 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9t7gg"]
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.497772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tg77\" (UniqueName: \"kubernetes.io/projected/c232aa63-d98b-4e40-9efb-00e3eff02b50-kube-api-access-4tg77\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.497819 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-utilities\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.497853 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-catalog-content\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.513053 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rj56"]
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.514314 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.516051 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.526863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rj56"]
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.598896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-catalog-content\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tg77\" (UniqueName: \"kubernetes.io/projected/c232aa63-d98b-4e40-9efb-00e3eff02b50-kube-api-access-4tg77\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-utilities\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-catalog-content\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.599512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c232aa63-d98b-4e40-9efb-00e3eff02b50-utilities\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.628452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tg77\" (UniqueName: \"kubernetes.io/projected/c232aa63-d98b-4e40-9efb-00e3eff02b50-kube-api-access-4tg77\") pod \"certified-operators-9t7gg\" (UID: \"c232aa63-d98b-4e40-9efb-00e3eff02b50\") " pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.632347 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9t7gg"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.701896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.702139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.702169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.803179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.803408 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.803445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.804222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.804226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.819305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"community-operators-5rj56\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:24 crc kubenswrapper[4820]: I0221 06:52:24.872905 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56"
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.017761 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9t7gg"]
Feb 21 06:52:25 crc kubenswrapper[4820]: W0221 06:52:25.022229 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc232aa63_d98b_4e40_9efb_00e3eff02b50.slice/crio-998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5 WatchSource:0}: Error finding container 998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5: Status 404 returned error can't find the container with id 998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.053023 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa04064f-b88b-4b27-a882-1cbdae3d4485" containerID="68fc377fe32fabd2d7d2adaf405193300ff4510b85660287c0d549b1e9d70b8e" exitCode=0
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.053136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerDied","Data":"68fc377fe32fabd2d7d2adaf405193300ff4510b85660287c0d549b1e9d70b8e"}
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.065636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78dnb" event={"ID":"ef1d43db-e76a-4d34-8528-4c549bcbc2e2","Type":"ContainerStarted","Data":"5430d4c162f7263a939de1d095ac3bf5d39348a0c744a761efac28f0cc8effb5"}
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.067224 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerStarted","Data":"998707af78ed66e3bad9215daba75d390c2a004e297638b72ade74076a612ac5"}
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.091520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78dnb" podStartSLOduration=2.700033287 podStartE2EDuration="4.09150511s" podCreationTimestamp="2026-02-21 06:52:21 +0000 UTC" firstStartedPulling="2026-02-21 06:52:23.037859474 +0000 UTC m=+318.070943672" lastFinishedPulling="2026-02-21 06:52:24.429331297 +0000 UTC m=+319.462415495" observedRunningTime="2026-02-21 06:52:25.088554288 +0000 UTC m=+320.121638486" watchObservedRunningTime="2026-02-21 06:52:25.09150511 +0000 UTC m=+320.124589308"
Feb 21 06:52:25 crc kubenswrapper[4820]: W0221 06:52:25.253047 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72aad09_5c42_41f0_9699_9160d1750191.slice/crio-a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7 WatchSource:0}: Error finding container a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7: Status 404 returned error can't find the container with id a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7
Feb 21 06:52:25 crc kubenswrapper[4820]: I0221 06:52:25.257340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rj56"]
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.074868 4820 generic.go:334] "Generic (PLEG): container finished" podID="c232aa63-d98b-4e40-9efb-00e3eff02b50" containerID="6ed649495fca2f2c93374d5f4a1bfc1f20fe3bdb03b312cb869d809b53f75547" exitCode=0
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.074933 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerDied","Data":"6ed649495fca2f2c93374d5f4a1bfc1f20fe3bdb03b312cb869d809b53f75547"}
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.077007 4820 generic.go:334] "Generic (PLEG): container finished" podID="a72aad09-5c42-41f0-9699-9160d1750191" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" exitCode=0
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.077063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6"}
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.077079 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerStarted","Data":"a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7"}
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.079777 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drqmx" event={"ID":"fa04064f-b88b-4b27-a882-1cbdae3d4485","Type":"ContainerStarted","Data":"a6cf8019d4731f9995f5537b11776a6813b13eb7017f2ac9c322d6e8903279ef"}
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.135318 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drqmx" podStartSLOduration=2.770062496 podStartE2EDuration="5.13529716s" podCreationTimestamp="2026-02-21 06:52:21 +0000 UTC" firstStartedPulling="2026-02-21 06:52:23.03549813 +0000 UTC m=+318.068582328" lastFinishedPulling="2026-02-21 06:52:25.400732794 +0000 UTC m=+320.433816992" observedRunningTime="2026-02-21 06:52:26.131921666 +0000 UTC m=+321.165005864" watchObservedRunningTime="2026-02-21 06:52:26.13529716 +0000 UTC m=+321.168381378"
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.432630 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"]
Feb 21 06:52:26 crc kubenswrapper[4820]: I0221 06:52:26.432854 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager" containerID="cri-o://5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" gracePeriod=30
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.004156 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085828 4820 generic.go:334] "Generic (PLEG): container finished" podID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f" exitCode=0
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085907 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerDied","Data":"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"}
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085936 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846df49455-q45hw"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846df49455-q45hw" event={"ID":"4c1b83c5-e50f-463a-9392-497a22f7d844","Type":"ContainerDied","Data":"3c65bb6f0b7321134f5ddd184383c9786ea6ec661c4055a1127206c7f81cfbb6"}
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.085968 4820 scope.go:117] "RemoveContainer" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.087713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerStarted","Data":"696effc6c5d724b20b99c93296e965d3d554d24e7cd5d0d836853f496f5b7193"}
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.089289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerStarted","Data":"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766"}
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.101941 4820 scope.go:117] "RemoveContainer" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"
Feb 21 06:52:27 crc kubenswrapper[4820]: E0221 06:52:27.102348 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f\": container with ID starting with 5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f not found: ID does not exist" containerID="5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.102379 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f"} err="failed to get container status \"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f\": rpc error: code = NotFound desc = could not find container \"5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f\": container with ID starting with 5077c90477e0c196a0462ecf353eef65c9eebb5ecdcd305967b7349a8d05b05f not found: ID does not exist"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") "
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143222 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") "
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143312 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") "
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") "
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.143414 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") pod \"4c1b83c5-e50f-463a-9392-497a22f7d844\" (UID: \"4c1b83c5-e50f-463a-9392-497a22f7d844\") "
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.145989 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config" (OuterVolumeSpecName: "config") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.146635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.146660 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.150547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.150898 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv" (OuterVolumeSpecName: "kube-api-access-fkbhv") pod "4c1b83c5-e50f-463a-9392-497a22f7d844" (UID: "4c1b83c5-e50f-463a-9392-497a22f7d844"). InnerVolumeSpecName "kube-api-access-fkbhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245065 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1b83c5-e50f-463a-9392-497a22f7d844-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245096 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-client-ca\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245106 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/4c1b83c5-e50f-463a-9392-497a22f7d844-kube-api-access-fkbhv\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245117 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.245125 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1b83c5-e50f-463a-9392-497a22f7d844-config\") on node \"crc\" DevicePath \"\""
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.407862 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"]
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.410525 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-846df49455-q45hw"]
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.594700 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"]
Feb 21 06:52:27 crc kubenswrapper[4820]: E0221 06:52:27.594889 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.594902 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.595001 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" containerName="controller-manager"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.595346 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597205 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597755 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.597897 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.598058 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.600483 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.605860 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.609772 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"]
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.703623 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1b83c5-e50f-463a-9392-497a22f7d844" path="/var/lib/kubelet/pods/4c1b83c5-e50f-463a-9392-497a22f7d844/volumes"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751082 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-proxy-ca-bundles\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899582c-a7b1-446d-93da-ea8774aafbb3-serving-cert\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggdq5\" (UniqueName: \"kubernetes.io/projected/9899582c-a7b1-446d-93da-ea8774aafbb3-kube-api-access-ggdq5\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-config\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.751215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-client-ca\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-config\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-client-ca\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852889 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-proxy-ca-bundles\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.852963 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899582c-a7b1-446d-93da-ea8774aafbb3-serving-cert\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.853022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggdq5\" (UniqueName: \"kubernetes.io/projected/9899582c-a7b1-446d-93da-ea8774aafbb3-kube-api-access-ggdq5\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.854024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-config\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.854145 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-proxy-ca-bundles\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.854341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9899582c-a7b1-446d-93da-ea8774aafbb3-client-ca\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.860938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9899582c-a7b1-446d-93da-ea8774aafbb3-serving-cert\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.871950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggdq5\" (UniqueName: \"kubernetes.io/projected/9899582c-a7b1-446d-93da-ea8774aafbb3-kube-api-access-ggdq5\") pod \"controller-manager-7597cc6bc8-wcbhx\" (UID: \"9899582c-a7b1-446d-93da-ea8774aafbb3\") " pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:27 crc kubenswrapper[4820]: I0221 06:52:27.910878 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.097093 4820 generic.go:334] "Generic (PLEG): container finished" podID="c232aa63-d98b-4e40-9efb-00e3eff02b50" containerID="696effc6c5d724b20b99c93296e965d3d554d24e7cd5d0d836853f496f5b7193" exitCode=0
Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.097191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerDied","Data":"696effc6c5d724b20b99c93296e965d3d554d24e7cd5d0d836853f496f5b7193"}
Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.101008 4820 generic.go:334] "Generic (PLEG): container finished" podID="a72aad09-5c42-41f0-9699-9160d1750191" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" exitCode=0
Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.101070 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766"}
Feb 21 06:52:28 crc kubenswrapper[4820]: I0221 06:52:28.111736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"]
Feb 21 06:52:28 crc kubenswrapper[4820]: W0221 06:52:28.122107 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9899582c_a7b1_446d_93da_ea8774aafbb3.slice/crio-2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec WatchSource:0}: Error finding container 2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec: Status 404 returned error can't find the container with id 2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.109709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9t7gg" event={"ID":"c232aa63-d98b-4e40-9efb-00e3eff02b50","Type":"ContainerStarted","Data":"761c69d1caddd72f68445ed8df3659e75dc11bab514b8c946d0870aadded4a76"}
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.112263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerStarted","Data":"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e"}
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.113594 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" event={"ID":"9899582c-a7b1-446d-93da-ea8774aafbb3","Type":"ContainerStarted","Data":"50e65abb62b18cae371ab979b7f718cc96d9a006b8e312855e460bf5cc338d73"}
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.113616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" event={"ID":"9899582c-a7b1-446d-93da-ea8774aafbb3","Type":"ContainerStarted","Data":"2fcc555dae5574019615750a72ac737e4ed76ce8e5b3a9f7f0ecd1ffcd93bfec"}
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.113807 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.118695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx"
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.130886 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9t7gg" podStartSLOduration=2.65180823 podStartE2EDuration="5.130868469s" podCreationTimestamp="2026-02-21 06:52:24 +0000 UTC" firstStartedPulling="2026-02-21 06:52:26.077297212 +0000 UTC m=+321.110381450" lastFinishedPulling="2026-02-21 06:52:28.556357491 +0000 UTC m=+323.589441689" observedRunningTime="2026-02-21 06:52:29.12907754 +0000 UTC m=+324.162161738" watchObservedRunningTime="2026-02-21 06:52:29.130868469 +0000 UTC m=+324.163952667"
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.148998 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rj56" podStartSLOduration=2.696038818 podStartE2EDuration="5.148980908s" podCreationTimestamp="2026-02-21 06:52:24 +0000 UTC" firstStartedPulling="2026-02-21 06:52:26.079685478 +0000 UTC m=+321.112769716" lastFinishedPulling="2026-02-21 06:52:28.532627608 +0000 UTC m=+323.565711806" observedRunningTime="2026-02-21 06:52:29.146864149 +0000 UTC m=+324.179948347" watchObservedRunningTime="2026-02-21 06:52:29.148980908 +0000 UTC m=+324.182065106"
Feb 21 06:52:29 crc kubenswrapper[4820]: I0221 06:52:29.165155 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7597cc6bc8-wcbhx" podStartSLOduration=3.165140383 podStartE2EDuration="3.165140383s" podCreationTimestamp="2026-02-21 06:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:29.163312662 +0000 UTC m=+324.196396870" watchObservedRunningTime="2026-02-21 06:52:29.165140383 +0000 UTC m=+324.198224581"
Feb 21 06:52:31 crc kubenswrapper[4820]: I0221 06:52:31.840141 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78dnb"
Feb 21 06:52:31 crc kubenswrapper[4820]: I0221 06:52:31.841435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78dnb"
Feb 21 06:52:31 crc 
kubenswrapper[4820]: I0221 06:52:31.892921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.030232 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.030536 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.064689 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.158723 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drqmx" Feb 21 06:52:32 crc kubenswrapper[4820]: I0221 06:52:32.163994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78dnb" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.579875 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p7z4h"] Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.580960 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.591085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p7z4h"] Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.639084 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.639141 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.684587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-bound-sa-token\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759144 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-trusted-ca\") pod 
\"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4c3ac2e-5829-4263-a162-b2faf5943159-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-tls\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4c3ac2e-5829-4263-a162-b2faf5943159-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-certificates\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.759285 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrr4k\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-kube-api-access-hrr4k\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.791283 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-bound-sa-token\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-trusted-ca\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4c3ac2e-5829-4263-a162-b2faf5943159-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-tls\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.861981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4c3ac2e-5829-4263-a162-b2faf5943159-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.862000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-certificates\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.862019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrr4k\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-kube-api-access-hrr4k\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.862728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/c4c3ac2e-5829-4263-a162-b2faf5943159-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.863938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-trusted-ca\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.864503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-certificates\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.870470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-registry-tls\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.873243 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.873276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.874228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/c4c3ac2e-5829-4263-a162-b2faf5943159-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.889409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-bound-sa-token\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.893914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrr4k\" (UniqueName: \"kubernetes.io/projected/c4c3ac2e-5829-4263-a162-b2faf5943159-kube-api-access-hrr4k\") pod \"image-registry-66df7c8f76-p7z4h\" (UID: \"c4c3ac2e-5829-4263-a162-b2faf5943159\") " pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.920230 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:34 crc kubenswrapper[4820]: I0221 06:52:34.950831 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:35 crc kubenswrapper[4820]: I0221 06:52:35.189571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9t7gg" Feb 21 06:52:35 crc kubenswrapper[4820]: I0221 06:52:35.194407 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rj56" Feb 21 06:52:35 crc kubenswrapper[4820]: W0221 06:52:35.359998 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c3ac2e_5829_4263_a162_b2faf5943159.slice/crio-fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b WatchSource:0}: Error finding container fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b: Status 404 returned error can't find the container with id fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b Feb 21 06:52:35 crc kubenswrapper[4820]: I0221 06:52:35.361678 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p7z4h"] Feb 21 06:52:36 crc kubenswrapper[4820]: I0221 06:52:36.148319 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" event={"ID":"c4c3ac2e-5829-4263-a162-b2faf5943159","Type":"ContainerStarted","Data":"5eb62fb7015f20c443d01aace01866a0aafb4943f1ef07e517b9299add9c206a"} Feb 21 06:52:36 crc kubenswrapper[4820]: I0221 06:52:36.148370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" event={"ID":"c4c3ac2e-5829-4263-a162-b2faf5943159","Type":"ContainerStarted","Data":"fc55f532f889ada41a7772d899a22329639a04ceb411d32cced7994f0e37c66b"} Feb 21 06:52:36 crc kubenswrapper[4820]: I0221 06:52:36.167069 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" podStartSLOduration=2.167048747 podStartE2EDuration="2.167048747s" podCreationTimestamp="2026-02-21 06:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:52:36.16464669 +0000 UTC m=+331.197730888" watchObservedRunningTime="2026-02-21 06:52:36.167048747 +0000 UTC m=+331.200132955" Feb 21 06:52:37 crc kubenswrapper[4820]: I0221 06:52:37.154171 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:54 crc kubenswrapper[4820]: I0221 06:52:54.958818 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-p7z4h" Feb 21 06:52:55 crc kubenswrapper[4820]: I0221 06:52:55.024943 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:53:13 crc kubenswrapper[4820]: I0221 06:53:13.816713 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:53:13 crc kubenswrapper[4820]: I0221 06:53:13.817135 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.075870 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" 
podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" containerID="cri-o://8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6" gracePeriod=30 Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408391 4820 generic.go:334] "Generic (PLEG): container finished" podID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerID="8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6" exitCode=0 Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerDied","Data":"8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6"} Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408742 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" event={"ID":"bcdc0b91-9179-44c7-9e5d-beb73c2b1110","Type":"ContainerDied","Data":"b598b1cdbe0f9e05c67729eff4eb4e0b676f67f494000629fbc22161406ca524"} Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.408764 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b598b1cdbe0f9e05c67729eff4eb4e0b676f67f494000629fbc22161406ca524" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.427493 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.598300 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.598337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.598373 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.611535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.611911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg" (OuterVolumeSpecName: "kube-api-access-g6nlg") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). 
InnerVolumeSpecName "kube-api-access-g6nlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.613802 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618313 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618390 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") pod \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\" (UID: \"bcdc0b91-9179-44c7-9e5d-beb73c2b1110\") " Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618771 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6nlg\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-kube-api-access-g6nlg\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618795 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.618810 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.620155 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.620278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.622396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.623071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bcdc0b91-9179-44c7-9e5d-beb73c2b1110" (UID: "bcdc0b91-9179-44c7-9e5d-beb73c2b1110"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720147 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720188 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720205 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:20 crc kubenswrapper[4820]: I0221 06:53:20.720223 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcdc0b91-9179-44c7-9e5d-beb73c2b1110-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.413655 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-566bt" Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.443178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.458579 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-566bt"] Feb 21 06:53:21 crc kubenswrapper[4820]: I0221 06:53:21.701924 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" path="/var/lib/kubelet/pods/bcdc0b91-9179-44c7-9e5d-beb73c2b1110/volumes" Feb 21 06:53:43 crc kubenswrapper[4820]: I0221 06:53:43.816285 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:53:43 crc kubenswrapper[4820]: I0221 06:53:43.816918 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.816724 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.818533 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.818972 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.819552 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 06:54:13 crc kubenswrapper[4820]: I0221 06:54:13.819676 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb" gracePeriod=600 Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.749487 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb" exitCode=0 Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.749596 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb"} Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.750301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df"} Feb 21 06:54:14 crc kubenswrapper[4820]: I0221 06:54:14.750443 4820 scope.go:117] "RemoveContainer" containerID="04395dbdc3966e272ac8672e98470fa0df681639c5f93bff6bd86f4f42a0e9eb" Feb 21 06:56:05 crc kubenswrapper[4820]: I0221 06:56:05.839770 4820 scope.go:117] "RemoveContainer" containerID="8048ccd2f14f2f271de65f71a2e6fa5f3c462cfe55114a86890015f00eed03c6" Feb 21 06:56:43 crc kubenswrapper[4820]: I0221 06:56:43.816938 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:56:43 crc kubenswrapper[4820]: I0221 06:56:43.817715 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:57:13 crc kubenswrapper[4820]: I0221 06:57:13.816379 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:57:13 crc kubenswrapper[4820]: I0221 06:57:13.817050 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.816674 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.819135 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.819443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.820526 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 06:57:43 crc kubenswrapper[4820]: I0221 06:57:43.820790 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df" gracePeriod=600 Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203221 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df" exitCode=0 Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df"} Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19"} Feb 21 06:57:44 crc kubenswrapper[4820]: I0221 06:57:44.203700 4820 scope.go:117] "RemoveContainer" containerID="32de9642d140b335669b1a18ad1b94d3e3f2b36b555260b47b1a72446c7842fb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.615724 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616544 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" containerID="cri-o://d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" containerID="cri-o://2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616966 4820 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" containerID="cri-o://1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.616993 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" containerID="cri-o://e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.617022 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.617049 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" containerID="cri-o://50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.617078 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" containerID="cri-o://2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.674546 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" 
containerID="cri-o://e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" gracePeriod=30 Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.901257 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.904055 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-acl-logging/0.log" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.904688 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-controller/0.log" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.905076 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956729 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d924v"] Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956907 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956918 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956927 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956933 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956939 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956946 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956963 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956973 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kubecfg-setup" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956980 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kubecfg-setup" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.956990 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.956999 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957009 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957014 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957023 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957029 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957037 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957042 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957051 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957057 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957082 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957087 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957169 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957179 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-ovn-metrics" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957187 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957194 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="northd" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957211 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-acl-logging" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957217 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="kube-rbac-proxy-node" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957224 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="nbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957233 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdc0b91-9179-44c7-9e5d-beb73c2b1110" containerName="registry" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957259 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="sbdb" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957269 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovn-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957350 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957356 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: E0221 06:58:03.957365 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957371 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957446 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.957455 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70ec449-ba11-47dd-a60c-f77993670045" containerName="ovnkube-controller" Feb 21 06:58:03 crc kubenswrapper[4820]: I0221 06:58:03.959866 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035756 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035826 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035874 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035907 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035924 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035979 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.035988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036016 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036049 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash" (OuterVolumeSpecName: "host-slash") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036064 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log" (OuterVolumeSpecName: "node-log") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036077 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036089 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036092 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") " Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036134 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") "
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") "
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036195 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") "
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036219 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") "
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036251 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") pod \"a70ec449-ba11-47dd-a60c-f77993670045\" (UID: \"a70ec449-ba11-47dd-a60c-f77993670045\") "
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036432 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036580 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket" (OuterVolumeSpecName: "log-socket") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036858 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036874 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.036992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037631 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037652 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037661 4820 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037670 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037680 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037688 4820 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-log-socket\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037696 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037705 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037716 4820 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037751 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a70ec449-ba11-47dd-a60c-f77993670045-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037762 4820 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037771 4820 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-slash\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037779 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037789 4820 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037799 4820 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037809 4820 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-node-log\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.037820 4820 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.041448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx" (OuterVolumeSpecName: "kube-api-access-2wgvx") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "kube-api-access-2wgvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.041480 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.048523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a70ec449-ba11-47dd-a60c-f77993670045" (UID: "a70ec449-ba11-47dd-a60c-f77993670045"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138878 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-kubelet\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-node-log\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-etc-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138949 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-systemd-units\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.138971 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-env-overrides\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-ovn\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139101 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtx6\" (UniqueName: \"kubernetes.io/projected/3861e6c5-94cc-44f1-b27b-96163c33ab85-kube-api-access-4qtx6\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139128 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-bin\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-var-lib-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-netd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-slash\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139204 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139223 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-log-socket\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139259 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-config\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovn-node-metrics-cert\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139303 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-systemd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139341 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-script-lib\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-netns\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139406 4820 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a70ec449-ba11-47dd-a60c-f77993670045-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139419 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgvx\" (UniqueName: \"kubernetes.io/projected/a70ec449-ba11-47dd-a60c-f77993670045-kube-api-access-2wgvx\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.139429 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a70ec449-ba11-47dd-a60c-f77993670045-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-ovn\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qtx6\" (UniqueName: \"kubernetes.io/projected/3861e6c5-94cc-44f1-b27b-96163c33ab85-kube-api-access-4qtx6\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-bin\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-var-lib-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240198 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-netd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-ovn\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240213 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-slash\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-slash\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-netd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-var-lib-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-cni-bin\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-log-socket\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-config\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240535 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovn-node-metrics-cert\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.240570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-log-socket\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241315 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-config\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-systemd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-systemd\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-script-lib\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-netns\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-run-netns\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-kubelet\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-node-log\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-run-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-host-kubelet\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-node-log\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.241930 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-etc-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-etc-openvswitch\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-env-overrides\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-systemd-units\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovnkube-script-lib\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.243306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3861e6c5-94cc-44f1-b27b-96163c33ab85-env-overrides\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.242738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3861e6c5-94cc-44f1-b27b-96163c33ab85-systemd-units\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.244357 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3861e6c5-94cc-44f1-b27b-96163c33ab85-ovn-node-metrics-cert\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.255078 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qtx6\" (UniqueName: \"kubernetes.io/projected/3861e6c5-94cc-44f1-b27b-96163c33ab85-kube-api-access-4qtx6\") pod \"ovnkube-node-d924v\" (UID: \"3861e6c5-94cc-44f1-b27b-96163c33ab85\") " pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.289294 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.317635 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovnkube-controller/3.log"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.319924 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-acl-logging/0.log"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.321278 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bvfjp_a70ec449-ba11-47dd-a60c-f77993670045/ovn-controller/0.log"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322043 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322063 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322070 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322077 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322084 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322090 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" exitCode=0
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322096 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" exitCode=143
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322103 4820 generic.go:334] "Generic (PLEG): container finished" podID="a70ec449-ba11-47dd-a60c-f77993670045" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" exitCode=143
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322117 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"}
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"}
Feb 21 06:58:04 crc
kubenswrapper[4820]: I0221 06:58:04.322277 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322289 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322296 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322303 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322310 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322316 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322322 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322329 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:58:04 crc 
kubenswrapper[4820]: I0221 06:58:04.322335 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322358 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322366 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322373 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322380 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322387 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322393 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322399 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322406 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322412 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322419 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322441 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322450 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322456 4820 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322463 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322469 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322477 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322483 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322489 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322496 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322502 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322512 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvfjp" event={"ID":"a70ec449-ba11-47dd-a60c-f77993670045","Type":"ContainerDied","Data":"118b64efb54199ff43507f06d1575b956885db91aab695f62818a8cb0302061c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322523 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322531 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322537 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322544 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322552 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322557 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322564 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} Feb 21 
06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322570 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322576 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322583 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.322348 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.324120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"b1c3f48ef79ebc2d14aecaf5e95da135ec3aa850d9709fde134f32e2af04e50f"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.337645 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/2.log" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339256 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/1.log" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339373 4820 generic.go:334] "Generic (PLEG): container finished" podID="abdb469c-ba72-4790-9ce3-785f4facbcb9" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" exitCode=2 Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339468 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerDied","Data":"03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.339527 4820 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"} Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.340054 4820 scope.go:117] "RemoveContainer" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.340229 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-94gxr_openshift-multus(abdb469c-ba72-4790-9ce3-785f4facbcb9)\"" pod="openshift-multus/multus-94gxr" podUID="abdb469c-ba72-4790-9ce3-785f4facbcb9" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.349916 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.358657 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.361894 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvfjp"] Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.394139 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.418852 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 
06:58:04.432473 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.444005 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.458301 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.504040 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.515401 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.528810 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.542771 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.543170 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543201 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status 
\"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543225 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.543627 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543668 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.543694 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.543964 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544174 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544283 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.544649 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544674 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID 
starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.544688 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.545045 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545133 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545204 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.545556 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f" Feb 21 
06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545580 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545595 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.545913 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545947 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.545967 4820 scope.go:117] "RemoveContainer" 
containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.546306 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546378 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546450 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.546761 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29" Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546786 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.546801 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"
Feb 21 06:58:04 crc kubenswrapper[4820]: E0221 06:58:04.547018 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547040 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547054 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547260 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547285 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547458 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547478 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547647 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547672 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547849 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.547872 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548063 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548087 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548395 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548468 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548777 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.548852 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549150 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549223 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549593 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549663 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.549957 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550022 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550328 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550355 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550628 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550651 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550922 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.550946 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551212 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551245 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551542 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551774 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.551795 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552054 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552130 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552436 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552454 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552661 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552691 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552874 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.552900 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553153 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553171 4820 scope.go:117] "RemoveContainer" containerID="8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553353 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f"} err="failed to get container status \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": rpc error: code = NotFound desc = could not find container \"8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f\": container with ID starting with 8d545d09c212578bfca5e468f72c405a06f254a5e24a970a0196846c38c1970f not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553374 4820 scope.go:117] "RemoveContainer" containerID="2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553588 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d"} err="failed to get container status \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": rpc error: code = NotFound desc = could not find container \"2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d\": container with ID starting with 2346d0bbf86ae6efdc56c3cf11c8bd11867d012f4a456a4c0f079beff1985e8d not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.553659 4820 scope.go:117] "RemoveContainer" containerID="1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554052 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d"} err="failed to get container status \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": rpc error: code = NotFound desc = could not find container \"1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d\": container with ID starting with 1160eeb9fb5f15a03c3dbb177f28ed6ac5f8c3b4ab17537893f48667df87d77d not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554074 4820 scope.go:117] "RemoveContainer" containerID="e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554279 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c"} err="failed to get container status \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": rpc error: code = NotFound desc = could not find container \"e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c\": container with ID starting with e13502ca283a9a882c31412b3c952f432c5c1a51a9a5cf6169616ef385f4ab5c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554361 4820 scope.go:117] "RemoveContainer" containerID="8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554741 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f"} err="failed to get container status \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": rpc error: code = NotFound desc = could not find container \"8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f\": container with ID starting with 8b682fc4670fb10d09aa9ea6edb1be75acae78d91e43e39d4e7827012fc4897f not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.554763 4820 scope.go:117] "RemoveContainer" containerID="50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555015 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb"} err="failed to get container status \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": rpc error: code = NotFound desc = could not find container \"50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb\": container with ID starting with 50a50a77a4c8bc014e965d0552cde17ac13ae98e660b85fe92ec60f34d66f0cb not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555053 4820 scope.go:117] "RemoveContainer" containerID="2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555321 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553"} err="failed to get container status \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": rpc error: code = NotFound desc = could not find container \"2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553\": container with ID starting with 2cd806d5e56c1b85b4057b87585cb0243de4b06357939f8fc747bcba2b2e8553 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555421 4820 scope.go:117] "RemoveContainer" containerID="d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555691 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29"} err="failed to get container status \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": rpc error: code = NotFound desc = could not find container \"d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29\": container with ID starting with d0a25d74fa8f12bdddbc0d2c51d01fca2a95e2f76513739dc8b425917dc8ea29 not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.555764 4820 scope.go:117] "RemoveContainer" containerID="e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.556091 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c"} err="failed to get container status \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": rpc error: code = NotFound desc = could not find container \"e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c\": container with ID starting with e63669cca8c6e9241d61e42e6eb96393a1a1faab45d72567dd9bf3ac2381114c not found: ID does not exist"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.556159 4820 scope.go:117] "RemoveContainer" containerID="e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"
Feb 21 06:58:04 crc kubenswrapper[4820]: I0221 06:58:04.556496 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be"} err="failed to get container status \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": rpc error: code = NotFound desc = could not find container \"e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be\": container with ID starting with e9606ecc422cbb7526dafa08f837395ecce9c65540477d16c9953716cd44a2be not found: ID does not exist"
Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.349402 4820 generic.go:334] "Generic (PLEG): container finished" podID="3861e6c5-94cc-44f1-b27b-96163c33ab85" containerID="ee6f5fa8750a1ee1efcba99330f9bb791e174552d0c87f21c98cd536e693c862" exitCode=0
Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.349454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerDied","Data":"ee6f5fa8750a1ee1efcba99330f9bb791e174552d0c87f21c98cd536e693c862"}
Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.709382 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70ec449-ba11-47dd-a60c-f77993670045" path="/var/lib/kubelet/pods/a70ec449-ba11-47dd-a60c-f77993670045/volumes"
Feb 21 06:58:05 crc kubenswrapper[4820]: I0221 06:58:05.896454 4820 scope.go:117] "RemoveContainer" containerID="e93f73bee750ca306afd3ddfa94a8774827f6fdb741f6647a1d12d84aec0eadf"
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358769 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"6d4dc05deddf61f0b8daccaf3beab53531d75cd199d87e0b70b97f256c6c01ba"}
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"d09400320f3883a4352f27a47ffe4d03d977a6d7b40e67b3560bb9452ee99886"}
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"b6ed13dfdcb6e914f2130d66e24190d91eccbda618a97ed7f70f83825b40f0b4"}
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358828 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"2cb2bd6d50c8de18b59cc21b6a0c4887e7ebd696f2a63479a4c81c810822d2e9"}
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"b54362445fc40925aa125900cd29e1c644294f051192bcf5879cfac01bdf6c15"}
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.358846 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"0b5c1f4165e08b1a3efa1246bf1a60990f1a69593041a25746961924a3c923c6"}
Feb 21 06:58:06 crc kubenswrapper[4820]: I0221 06:58:06.359987 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/2.log"
Feb 21 06:58:08 crc kubenswrapper[4820]: I0221 06:58:08.378310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"aeb6e959ef2d828f5d648dd9c17bd8168f92c3984f4453b4a892180018b8640f"}
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.187515 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"]
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.188477 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.190622 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.190675 4820 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-45wzb"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.190622 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.192883 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.298067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.298136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.298174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.399201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.400449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.400571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.400779 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.401473 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.421523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"crc-storage-crc-p4pxl\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: I0221 06:58:09.506721 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530299 4820 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530395 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530433 4820 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:09 crc kubenswrapper[4820]: E0221 06:58:09.530556 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(1def9586a0ca423895b172ba36815c010e2e9b8442298941cd3998e549cce198): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-p4pxl" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c"
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.234543 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"]
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.234908 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.235302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273623 4820 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273681 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273701 4820 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl"
Feb 21 06:58:11 crc kubenswrapper[4820]: E0221 06:58:11.273738 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(c951b49ef014d0744f68703d3f4268ea9ba4e457487aa8f9a381e86f5880f477): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-p4pxl" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c"
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.397340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" event={"ID":"3861e6c5-94cc-44f1-b27b-96163c33ab85","Type":"ContainerStarted","Data":"d38071d605b64db3a9a45cc342615932511de64d66b67e3f79ee517d8edba6a2"}
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.397977 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.398087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.447215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:11 crc kubenswrapper[4820]: I0221 06:58:11.516536 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" podStartSLOduration=8.516515383 podStartE2EDuration="8.516515383s" podCreationTimestamp="2026-02-21 06:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:58:11.437288349 +0000 UTC m=+666.470372547" watchObservedRunningTime="2026-02-21 06:58:11.516515383 +0000 UTC m=+666.549599581"
Feb 21 06:58:12 crc kubenswrapper[4820]: I0221 06:58:12.404464 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:12 crc kubenswrapper[4820]: I0221 06:58:12.482754 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d924v"
Feb 21 06:58:18 crc kubenswrapper[4820]: I0221 06:58:18.696558
4820 scope.go:117] "RemoveContainer" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" Feb 21 06:58:18 crc kubenswrapper[4820]: E0221 06:58:18.697037 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-94gxr_openshift-multus(abdb469c-ba72-4790-9ce3-785f4facbcb9)\"" pod="openshift-multus/multus-94gxr" podUID="abdb469c-ba72-4790-9ce3-785f4facbcb9" Feb 21 06:58:23 crc kubenswrapper[4820]: I0221 06:58:23.696548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: I0221 06:58:23.697753 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726449 4820 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726543 4820 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726570 4820 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:23 crc kubenswrapper[4820]: E0221 06:58:23.726645 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-p4pxl_crc-storage(3c764255-4b53-476b-ad40-4bd38c76f92c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-p4pxl_crc-storage_3c764255-4b53-476b-ad40-4bd38c76f92c_0(01436420e71c84eaf8b34ca0f5c12a30c169271298b617d7d707599313ca50e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-p4pxl" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" Feb 21 06:58:29 crc kubenswrapper[4820]: I0221 06:58:29.696921 4820 scope.go:117] "RemoveContainer" containerID="03d0a6e2d37266d0266ccb9f72a6efebcd4bdac32c4b5bd8e9b6a73ba841b1e2" Feb 21 06:58:30 crc kubenswrapper[4820]: I0221 06:58:30.526585 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-94gxr_abdb469c-ba72-4790-9ce3-785f4facbcb9/kube-multus/2.log" Feb 21 06:58:30 crc kubenswrapper[4820]: I0221 06:58:30.526904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-94gxr" event={"ID":"abdb469c-ba72-4790-9ce3-785f4facbcb9","Type":"ContainerStarted","Data":"3d9b631313cf6fc11b87dd9d120ece7594f828813adbf98746fe417b673ae9ba"} Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.319352 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d924v" Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.696359 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.697136 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.948194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 06:58:34 crc kubenswrapper[4820]: I0221 06:58:34.954613 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 06:58:35 crc kubenswrapper[4820]: I0221 06:58:35.558911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p4pxl" event={"ID":"3c764255-4b53-476b-ad40-4bd38c76f92c","Type":"ContainerStarted","Data":"d2546f2342679c6275e5e092254a7ea71f67352d551a33a3c38e63858eb43dfa"} Feb 21 06:58:37 crc kubenswrapper[4820]: I0221 06:58:37.571308 4820 generic.go:334] "Generic (PLEG): container finished" podID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerID="edb2f0d9506d60a67187b5d382cfd1305f456f91506d3822d04d40dbb03ad374" exitCode=0 Feb 21 06:58:37 crc kubenswrapper[4820]: I0221 06:58:37.571394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p4pxl" event={"ID":"3c764255-4b53-476b-ad40-4bd38c76f92c","Type":"ContainerDied","Data":"edb2f0d9506d60a67187b5d382cfd1305f456f91506d3822d04d40dbb03ad374"} Feb 21 06:58:38 crc kubenswrapper[4820]: I0221 06:58:38.847598 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") pod \"3c764255-4b53-476b-ad40-4bd38c76f92c\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") pod \"3c764255-4b53-476b-ad40-4bd38c76f92c\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") pod \"3c764255-4b53-476b-ad40-4bd38c76f92c\" (UID: \"3c764255-4b53-476b-ad40-4bd38c76f92c\") " Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.034288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3c764255-4b53-476b-ad40-4bd38c76f92c" (UID: "3c764255-4b53-476b-ad40-4bd38c76f92c"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.041075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj" (OuterVolumeSpecName: "kube-api-access-r9lqj") pod "3c764255-4b53-476b-ad40-4bd38c76f92c" (UID: "3c764255-4b53-476b-ad40-4bd38c76f92c"). InnerVolumeSpecName "kube-api-access-r9lqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.055957 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3c764255-4b53-476b-ad40-4bd38c76f92c" (UID: "3c764255-4b53-476b-ad40-4bd38c76f92c"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.135495 4820 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3c764255-4b53-476b-ad40-4bd38c76f92c-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.135544 4820 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3c764255-4b53-476b-ad40-4bd38c76f92c-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.135565 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lqj\" (UniqueName: \"kubernetes.io/projected/3c764255-4b53-476b-ad40-4bd38c76f92c-kube-api-access-r9lqj\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.587104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p4pxl" event={"ID":"3c764255-4b53-476b-ad40-4bd38c76f92c","Type":"ContainerDied","Data":"d2546f2342679c6275e5e092254a7ea71f67352d551a33a3c38e63858eb43dfa"} Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.587143 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2546f2342679c6275e5e092254a7ea71f67352d551a33a3c38e63858eb43dfa" Feb 21 06:58:39 crc kubenswrapper[4820]: I0221 06:58:39.587223 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p4pxl" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.915755 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx"] Feb 21 06:58:45 crc kubenswrapper[4820]: E0221 06:58:45.917026 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerName="storage" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.917119 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerName="storage" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.917299 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" containerName="storage" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.918062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.920700 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 06:58:45 crc kubenswrapper[4820]: I0221 06:58:45.936233 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx"] Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.020512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 
06:58:46.020575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.020706 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.121773 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.121842 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.121920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.122213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.123407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.141038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.274071 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.450597 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx"] Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.628169 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerStarted","Data":"a99ab81eb11638cf6c5a9282d9293b3d813474cdd0dc87949c6fb84e27a90094"} Feb 21 06:58:46 crc kubenswrapper[4820]: I0221 06:58:46.628487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerStarted","Data":"1ac499dbdefe4bb5ee6d8b1f5574e58f12e93f7d2c75f7f28d0ddb0e456a0387"} Feb 21 06:58:47 crc kubenswrapper[4820]: I0221 06:58:47.635496 4820 generic.go:334] "Generic (PLEG): container finished" podID="9e889767-aefe-4149-8677-fd116ae8d598" containerID="a99ab81eb11638cf6c5a9282d9293b3d813474cdd0dc87949c6fb84e27a90094" exitCode=0 Feb 21 06:58:47 crc kubenswrapper[4820]: I0221 06:58:47.635557 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"a99ab81eb11638cf6c5a9282d9293b3d813474cdd0dc87949c6fb84e27a90094"} Feb 21 06:58:49 crc kubenswrapper[4820]: I0221 06:58:49.647789 4820 generic.go:334] "Generic (PLEG): container finished" podID="9e889767-aefe-4149-8677-fd116ae8d598" containerID="bc1a66108aaac39e7b2ac18192ce7070814f68091053332da91727692ceb8f51" exitCode=0 Feb 21 06:58:49 crc kubenswrapper[4820]: I0221 06:58:49.647853 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"bc1a66108aaac39e7b2ac18192ce7070814f68091053332da91727692ceb8f51"} Feb 21 06:58:50 crc kubenswrapper[4820]: I0221 06:58:50.658746 4820 generic.go:334] "Generic (PLEG): container finished" podID="9e889767-aefe-4149-8677-fd116ae8d598" containerID="af7bd2bf9bde375bf6eb49b1f68e0eb467edc8c3c3fc4d789d5c3f1e133b0407" exitCode=0 Feb 21 06:58:50 crc kubenswrapper[4820]: I0221 06:58:50.658848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"af7bd2bf9bde375bf6eb49b1f68e0eb467edc8c3c3fc4d789d5c3f1e133b0407"} Feb 21 06:58:51 crc kubenswrapper[4820]: I0221 06:58:51.917114 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.089614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") pod \"9e889767-aefe-4149-8677-fd116ae8d598\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.089741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") pod \"9e889767-aefe-4149-8677-fd116ae8d598\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.089783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") pod \"9e889767-aefe-4149-8677-fd116ae8d598\" (UID: \"9e889767-aefe-4149-8677-fd116ae8d598\") " Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.090385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle" (OuterVolumeSpecName: "bundle") pod "9e889767-aefe-4149-8677-fd116ae8d598" (UID: "9e889767-aefe-4149-8677-fd116ae8d598"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.096082 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr" (OuterVolumeSpecName: "kube-api-access-dqxzr") pod "9e889767-aefe-4149-8677-fd116ae8d598" (UID: "9e889767-aefe-4149-8677-fd116ae8d598"). InnerVolumeSpecName "kube-api-access-dqxzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.191152 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.191198 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqxzr\" (UniqueName: \"kubernetes.io/projected/9e889767-aefe-4149-8677-fd116ae8d598-kube-api-access-dqxzr\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.277364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util" (OuterVolumeSpecName: "util") pod "9e889767-aefe-4149-8677-fd116ae8d598" (UID: "9e889767-aefe-4149-8677-fd116ae8d598"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.292633 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e889767-aefe-4149-8677-fd116ae8d598-util\") on node \"crc\" DevicePath \"\"" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.672441 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" event={"ID":"9e889767-aefe-4149-8677-fd116ae8d598","Type":"ContainerDied","Data":"1ac499dbdefe4bb5ee6d8b1f5574e58f12e93f7d2c75f7f28d0ddb0e456a0387"} Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.672482 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx" Feb 21 06:58:52 crc kubenswrapper[4820]: I0221 06:58:52.672486 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac499dbdefe4bb5ee6d8b1f5574e58f12e93f7d2c75f7f28d0ddb0e456a0387" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.591811 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4rnft"] Feb 21 06:58:54 crc kubenswrapper[4820]: E0221 06:58:54.591994 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="util" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592005 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="util" Feb 21 06:58:54 crc kubenswrapper[4820]: E0221 06:58:54.592012 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="pull" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592018 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="pull" Feb 21 06:58:54 crc kubenswrapper[4820]: E0221 06:58:54.592036 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="extract" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592042 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="extract" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592127 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e889767-aefe-4149-8677-fd116ae8d598" containerName="extract" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.592464 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.603482 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.604109 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.603859 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-985r8" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.609946 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4rnft"] Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.757071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lj2\" (UniqueName: \"kubernetes.io/projected/375887b5-9d2e-4af8-9128-789ebd290f97-kube-api-access-l4lj2\") pod \"nmstate-operator-694c9596b7-4rnft\" (UID: \"375887b5-9d2e-4af8-9128-789ebd290f97\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.858775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lj2\" (UniqueName: \"kubernetes.io/projected/375887b5-9d2e-4af8-9128-789ebd290f97-kube-api-access-l4lj2\") pod \"nmstate-operator-694c9596b7-4rnft\" (UID: \"375887b5-9d2e-4af8-9128-789ebd290f97\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.875170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lj2\" (UniqueName: \"kubernetes.io/projected/375887b5-9d2e-4af8-9128-789ebd290f97-kube-api-access-l4lj2\") pod \"nmstate-operator-694c9596b7-4rnft\" (UID: 
\"375887b5-9d2e-4af8-9128-789ebd290f97\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:54 crc kubenswrapper[4820]: I0221 06:58:54.964530 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" Feb 21 06:58:55 crc kubenswrapper[4820]: I0221 06:58:55.129948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4rnft"] Feb 21 06:58:55 crc kubenswrapper[4820]: I0221 06:58:55.692498 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" event={"ID":"375887b5-9d2e-4af8-9128-789ebd290f97","Type":"ContainerStarted","Data":"e3c8550742b9818c3c05d7bfdcb5bfabbbc87b0aaa55800b5253bb606c1038c5"} Feb 21 06:58:57 crc kubenswrapper[4820]: I0221 06:58:57.707796 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" event={"ID":"375887b5-9d2e-4af8-9128-789ebd290f97","Type":"ContainerStarted","Data":"94b9282885a29e1e65024a337441fe18a0cf000686ec361502c2bac2b70f200a"} Feb 21 06:58:57 crc kubenswrapper[4820]: I0221 06:58:57.733221 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-4rnft" podStartSLOduration=1.6844870410000001 podStartE2EDuration="3.733192858s" podCreationTimestamp="2026-02-21 06:58:54 +0000 UTC" firstStartedPulling="2026-02-21 06:58:55.142903064 +0000 UTC m=+710.175987262" lastFinishedPulling="2026-02-21 06:58:57.191608881 +0000 UTC m=+712.224693079" observedRunningTime="2026-02-21 06:58:57.732981993 +0000 UTC m=+712.766066201" watchObservedRunningTime="2026-02-21 06:58:57.733192858 +0000 UTC m=+712.766277116" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.659123 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m6svj"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 
06:58:58.659940 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.666450 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c7pvt" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.678232 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m6svj"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.681277 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.682069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.683792 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.692039 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tz942"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.692683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.698987 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.819113 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.820005 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.821731 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.822220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.822415 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wlznm" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.827950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.828011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtlh\" (UniqueName: \"kubernetes.io/projected/b7930d8a-8ded-4552-9c0a-aa73fa2006e2-kube-api-access-jqtlh\") pod \"nmstate-metrics-58c85c668d-m6svj\" (UID: \"b7930d8a-8ded-4552-9c0a-aa73fa2006e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.828039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctf6\" (UniqueName: \"kubernetes.io/projected/62b9a00a-9b7e-4057-bc85-2a16c48957f4-kube-api-access-9ctf6\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.828069 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-ovs-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.829110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-dbus-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.829157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-nmstate-lock\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.829194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zv7f\" (UniqueName: \"kubernetes.io/projected/a6c76731-bd23-43eb-84f6-84d675965035-kube-api-access-2zv7f\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.833010 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2"] Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod 
\"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8zv\" (UniqueName: \"kubernetes.io/projected/15902f84-d2f7-42a0-929e-89c21cffddd8-kube-api-access-wl8zv\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtlh\" (UniqueName: \"kubernetes.io/projected/b7930d8a-8ded-4552-9c0a-aa73fa2006e2-kube-api-access-jqtlh\") pod \"nmstate-metrics-58c85c668d-m6svj\" (UID: \"b7930d8a-8ded-4552-9c0a-aa73fa2006e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15902f84-d2f7-42a0-929e-89c21cffddd8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:58 crc kubenswrapper[4820]: E0221 06:58:58.930707 4820 secret.go:188] Couldn't get secret 
openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctf6\" (UniqueName: \"kubernetes.io/projected/62b9a00a-9b7e-4057-bc85-2a16c48957f4-kube-api-access-9ctf6\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: E0221 06:58:58.930780 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair podName:62b9a00a-9b7e-4057-bc85-2a16c48957f4 nodeName:}" failed. No retries permitted until 2026-02-21 06:58:59.430760124 +0000 UTC m=+714.463844322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair") pod "nmstate-webhook-866bcb46dc-c8gmp" (UID: "62b9a00a-9b7e-4057-bc85-2a16c48957f4") : secret "openshift-nmstate-webhook" not found Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-ovs-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930874 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-dbus-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930914 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-ovs-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-nmstate-lock\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.930984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zv7f\" (UniqueName: \"kubernetes.io/projected/a6c76731-bd23-43eb-84f6-84d675965035-kube-api-access-2zv7f\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.931020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-nmstate-lock\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.931173 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a6c76731-bd23-43eb-84f6-84d675965035-dbus-socket\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.952429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtlh\" (UniqueName: 
\"kubernetes.io/projected/b7930d8a-8ded-4552-9c0a-aa73fa2006e2-kube-api-access-jqtlh\") pod \"nmstate-metrics-58c85c668d-m6svj\" (UID: \"b7930d8a-8ded-4552-9c0a-aa73fa2006e2\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.959876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zv7f\" (UniqueName: \"kubernetes.io/projected/a6c76731-bd23-43eb-84f6-84d675965035-kube-api-access-2zv7f\") pod \"nmstate-handler-tz942\" (UID: \"a6c76731-bd23-43eb-84f6-84d675965035\") " pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.964856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctf6\" (UniqueName: \"kubernetes.io/projected/62b9a00a-9b7e-4057-bc85-2a16c48957f4-kube-api-access-9ctf6\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:58 crc kubenswrapper[4820]: I0221 06:58:58.974114 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.026049 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5698ddd759-6nvxq"] Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.026702 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032107 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8zv\" (UniqueName: \"kubernetes.io/projected/15902f84-d2f7-42a0-929e-89c21cffddd8-kube-api-access-wl8zv\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15902f84-d2f7-42a0-929e-89c21cffddd8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: E0221 06:58:59.032175 4820 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 21 06:58:59 crc kubenswrapper[4820]: E0221 06:58:59.032228 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert podName:15902f84-d2f7-42a0-929e-89c21cffddd8 nodeName:}" failed. No retries permitted until 2026-02-21 06:58:59.532212907 +0000 UTC m=+714.565297105 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-b5kf2" (UID: "15902f84-d2f7-42a0-929e-89c21cffddd8") : secret "plugin-serving-cert" not found Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.032955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15902f84-d2f7-42a0-929e-89c21cffddd8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.062443 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.066482 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5698ddd759-6nvxq"] Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.081038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8zv\" (UniqueName: \"kubernetes.io/projected/15902f84-d2f7-42a0-929e-89c21cffddd8-kube-api-access-wl8zv\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-service-ca\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134204 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wltt\" (UniqueName: \"kubernetes.io/projected/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-kube-api-access-6wltt\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-oauth-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-oauth-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 
06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.134325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-trusted-ca-bundle\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.234995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-trusted-ca-bundle\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-service-ca\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc 
kubenswrapper[4820]: I0221 06:58:59.235427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wltt\" (UniqueName: \"kubernetes.io/projected/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-kube-api-access-6wltt\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-oauth-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.235474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-oauth-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.236254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-oauth-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.239922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-service-ca\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 
06:58:59.240396 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-trusted-ca-bundle\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.240665 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.241140 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-oauth-config\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.242280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-console-serving-cert\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.250539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wltt\" (UniqueName: \"kubernetes.io/projected/7df248d1-5aca-4b9d-97f5-9e4ff67ef219-kube-api-access-6wltt\") pod \"console-5698ddd759-6nvxq\" (UID: \"7df248d1-5aca-4b9d-97f5-9e4ff67ef219\") " pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.277691 4820 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-m6svj"] Feb 21 06:58:59 crc kubenswrapper[4820]: W0221 06:58:59.281830 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7930d8a_8ded_4552_9c0a_aa73fa2006e2.slice/crio-264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38 WatchSource:0}: Error finding container 264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38: Status 404 returned error can't find the container with id 264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38 Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.339211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.439992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.443223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62b9a00a-9b7e-4057-bc85-2a16c48957f4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-c8gmp\" (UID: \"62b9a00a-9b7e-4057-bc85-2a16c48957f4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.485706 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5698ddd759-6nvxq"] Feb 21 06:58:59 crc kubenswrapper[4820]: W0221 06:58:59.490669 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df248d1_5aca_4b9d_97f5_9e4ff67ef219.slice/crio-ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf WatchSource:0}: Error finding container ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf: Status 404 returned error can't find the container with id ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.540920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.543747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15902f84-d2f7-42a0-929e-89c21cffddd8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-b5kf2\" (UID: \"15902f84-d2f7-42a0-929e-89c21cffddd8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.649693 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.735553 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.745537 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5698ddd759-6nvxq" event={"ID":"7df248d1-5aca-4b9d-97f5-9e4ff67ef219","Type":"ContainerStarted","Data":"f7081e9cdff96c1c1af93c0e1c2819be80f9df08689fdde3d6f3185dc62ab219"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.745575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5698ddd759-6nvxq" event={"ID":"7df248d1-5aca-4b9d-97f5-9e4ff67ef219","Type":"ContainerStarted","Data":"ca96de8ac3f2ba46ff645484b5a0c61b515c401ce517948f9f24c24b4f4dddaf"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.748300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" event={"ID":"b7930d8a-8ded-4552-9c0a-aa73fa2006e2","Type":"ContainerStarted","Data":"264b630e8e93fb7f96b0795d4f0ac2369d905121f4316798b979f5b108de7e38"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.749229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tz942" event={"ID":"a6c76731-bd23-43eb-84f6-84d675965035","Type":"ContainerStarted","Data":"22adcde67782a676db42dc3bb6263f558f61749b7265350ca96293b7d510cc39"} Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.767273 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5698ddd759-6nvxq" podStartSLOduration=1.767254103 podStartE2EDuration="1.767254103s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 06:58:59.764768505 +0000 UTC m=+714.797852703" watchObservedRunningTime="2026-02-21 06:58:59.767254103 +0000 UTC m=+714.800338301" Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 
06:58:59.902553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp"] Feb 21 06:58:59 crc kubenswrapper[4820]: I0221 06:58:59.976084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2"] Feb 21 06:59:00 crc kubenswrapper[4820]: I0221 06:59:00.756279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" event={"ID":"62b9a00a-9b7e-4057-bc85-2a16c48957f4","Type":"ContainerStarted","Data":"0961319806db638be7dacbf7d5df3428b961486cc05353c81b9ba146c2afcdbf"} Feb 21 06:59:00 crc kubenswrapper[4820]: I0221 06:59:00.757334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" event={"ID":"15902f84-d2f7-42a0-929e-89c21cffddd8","Type":"ContainerStarted","Data":"bb7f418cf5ab07e0d8ed6fafd7c7ee48ae84db1961f292dffa72ab01cf1da892"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.763469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" event={"ID":"b7930d8a-8ded-4552-9c0a-aa73fa2006e2","Type":"ContainerStarted","Data":"126c5081a29b742b19beb4d9030d81abb61fa053a1711c3a958c64c1addbcca9"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.766024 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tz942" event={"ID":"a6c76731-bd23-43eb-84f6-84d675965035","Type":"ContainerStarted","Data":"f847d0eb4586d68de909563cb431d9be88e7983b82d431e38e1e474023ec961c"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.766175 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.768805 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" 
event={"ID":"62b9a00a-9b7e-4057-bc85-2a16c48957f4","Type":"ContainerStarted","Data":"e3cfb83da897c6e4c413869ade397ade6c24e72116d1b515b38491d7ae8476e1"} Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.768943 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:59:01 crc kubenswrapper[4820]: I0221 06:59:01.779626 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tz942" podStartSLOduration=1.443736558 podStartE2EDuration="3.779610685s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.118104065 +0000 UTC m=+714.151188273" lastFinishedPulling="2026-02-21 06:59:01.453978202 +0000 UTC m=+716.487062400" observedRunningTime="2026-02-21 06:59:01.777550428 +0000 UTC m=+716.810634636" watchObservedRunningTime="2026-02-21 06:59:01.779610685 +0000 UTC m=+716.812694883" Feb 21 06:59:02 crc kubenswrapper[4820]: I0221 06:59:02.778585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" event={"ID":"15902f84-d2f7-42a0-929e-89c21cffddd8","Type":"ContainerStarted","Data":"11b0f0d8897db0b87de016191e55d070318bf0c4ff5531dbbbdf2cd6ce9dd341"} Feb 21 06:59:02 crc kubenswrapper[4820]: I0221 06:59:02.795916 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-b5kf2" podStartSLOduration=2.436071996 podStartE2EDuration="4.795892524s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.990218333 +0000 UTC m=+715.023302531" lastFinishedPulling="2026-02-21 06:59:02.350038871 +0000 UTC m=+717.383123059" observedRunningTime="2026-02-21 06:59:02.789695531 +0000 UTC m=+717.822779739" watchObservedRunningTime="2026-02-21 06:59:02.795892524 +0000 UTC m=+717.828976722" Feb 21 06:59:02 crc kubenswrapper[4820]: I0221 
06:59:02.796805 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" podStartSLOduration=3.253596514 podStartE2EDuration="4.79679664s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.912417492 +0000 UTC m=+714.945501690" lastFinishedPulling="2026-02-21 06:59:01.455617588 +0000 UTC m=+716.488701816" observedRunningTime="2026-02-21 06:59:01.813610506 +0000 UTC m=+716.846694694" watchObservedRunningTime="2026-02-21 06:59:02.79679664 +0000 UTC m=+717.829880838" Feb 21 06:59:03 crc kubenswrapper[4820]: I0221 06:59:03.784325 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" event={"ID":"b7930d8a-8ded-4552-9c0a-aa73fa2006e2","Type":"ContainerStarted","Data":"0fdb247c12bc64fa564d3e792e63e1b7d5d8fa0745aaa9536eeea87648423351"} Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.095970 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tz942" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.114879 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-m6svj" podStartSLOduration=6.850332706 podStartE2EDuration="11.114860345s" podCreationTimestamp="2026-02-21 06:58:58 +0000 UTC" firstStartedPulling="2026-02-21 06:58:59.283851981 +0000 UTC m=+714.316936179" lastFinishedPulling="2026-02-21 06:59:03.5483796 +0000 UTC m=+718.581463818" observedRunningTime="2026-02-21 06:59:03.805059565 +0000 UTC m=+718.838143833" watchObservedRunningTime="2026-02-21 06:59:09.114860345 +0000 UTC m=+724.147944553" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.340262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.340360 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.346519 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.828935 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5698ddd759-6nvxq" Feb 21 06:59:09 crc kubenswrapper[4820]: I0221 06:59:09.887172 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:59:19 crc kubenswrapper[4820]: I0221 06:59:19.656987 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-c8gmp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.094857 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp"] Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.096840 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.099428 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.101055 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp"] Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.236667 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.236792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.236835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: 
I0221 06:59:31.338306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.338931 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.365492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.417058 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.628582 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp"] Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.937895 4820 generic.go:334] "Generic (PLEG): container finished" podID="2e4047bc-d968-4163-82f1-13cecd18893e" containerID="894b2292bfb37d901f11037b3cc99f84ebbea287d26c0748bf1681cd71062222" exitCode=0 Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.937948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"894b2292bfb37d901f11037b3cc99f84ebbea287d26c0748bf1681cd71062222"} Feb 21 06:59:31 crc kubenswrapper[4820]: I0221 06:59:31.938228 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerStarted","Data":"574f780f0fa36ae610634245afd7290ae3dc549b1d7c077b4743d15e1603aa06"} Feb 21 06:59:33 crc kubenswrapper[4820]: I0221 06:59:33.951215 4820 generic.go:334] "Generic (PLEG): container finished" podID="2e4047bc-d968-4163-82f1-13cecd18893e" containerID="a2cfb57cc71230dd50799736f8b501f5fc48e2d0c48bbc9223d0d47bc57ab51c" exitCode=0 Feb 21 06:59:33 crc kubenswrapper[4820]: I0221 06:59:33.951573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"a2cfb57cc71230dd50799736f8b501f5fc48e2d0c48bbc9223d0d47bc57ab51c"} Feb 21 06:59:34 crc kubenswrapper[4820]: I0221 06:59:34.940307 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cgbzf" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" containerID="cri-o://abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" gracePeriod=15 Feb 21 06:59:34 crc kubenswrapper[4820]: I0221 06:59:34.965735 4820 generic.go:334] "Generic (PLEG): container finished" podID="2e4047bc-d968-4163-82f1-13cecd18893e" containerID="a8bb11fad7ebef1399ca38d060a61becc5101d9b7e120c9d16f0a9f0880826af" exitCode=0 Feb 21 06:59:34 crc kubenswrapper[4820]: I0221 06:59:34.965778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"a8bb11fad7ebef1399ca38d060a61becc5101d9b7e120c9d16f0a9f0880826af"} Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.265979 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-cgbzf_18b46a58-b11c-4760-bd38-1c875c4ecf21/console/0.log" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.266285 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") 
pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387450 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") pod \"18b46a58-b11c-4760-bd38-1c875c4ecf21\" (UID: \"18b46a58-b11c-4760-bd38-1c875c4ecf21\") " Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.387926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388101 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config" (OuterVolumeSpecName: "console-config") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388365 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.388429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca" (OuterVolumeSpecName: "service-ca") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.392734 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.392969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.393007 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r" (OuterVolumeSpecName: "kube-api-access-xh87r") pod "18b46a58-b11c-4760-bd38-1c875c4ecf21" (UID: "18b46a58-b11c-4760-bd38-1c875c4ecf21"). InnerVolumeSpecName "kube-api-access-xh87r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489372 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489398 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489406 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18b46a58-b11c-4760-bd38-1c875c4ecf21-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489414 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-service-ca\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489422 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh87r\" (UniqueName: \"kubernetes.io/projected/18b46a58-b11c-4760-bd38-1c875c4ecf21-kube-api-access-xh87r\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.489433 4820 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b46a58-b11c-4760-bd38-1c875c4ecf21-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971793 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cgbzf_18b46a58-b11c-4760-bd38-1c875c4ecf21/console/0.log" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971847 4820 generic.go:334] "Generic (PLEG): container finished" podID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" exitCode=2 Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971936 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgbzf" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.971944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerDied","Data":"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25"} Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.972010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgbzf" event={"ID":"18b46a58-b11c-4760-bd38-1c875c4ecf21","Type":"ContainerDied","Data":"0767d187d2981c7d5f1c668b318887301f7e5326b2d0aaa6f0c17cc8530104d7"} Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.972038 4820 scope.go:117] "RemoveContainer" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.994633 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.996614 4820 scope.go:117] "RemoveContainer" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" Feb 21 06:59:35 
crc kubenswrapper[4820]: E0221 06:59:35.997078 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25\": container with ID starting with abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25 not found: ID does not exist" containerID="abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25" Feb 21 06:59:35 crc kubenswrapper[4820]: I0221 06:59:35.997220 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25"} err="failed to get container status \"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25\": rpc error: code = NotFound desc = could not find container \"abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25\": container with ID starting with abba477e346b677224bf652c1b936b36a268f38e9dbefffa59d68d4047f25e25 not found: ID does not exist" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.001750 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cgbzf"] Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.219595 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.399198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") pod \"2e4047bc-d968-4163-82f1-13cecd18893e\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.399459 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") pod \"2e4047bc-d968-4163-82f1-13cecd18893e\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.399568 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") pod \"2e4047bc-d968-4163-82f1-13cecd18893e\" (UID: \"2e4047bc-d968-4163-82f1-13cecd18893e\") " Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.401277 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle" (OuterVolumeSpecName: "bundle") pod "2e4047bc-d968-4163-82f1-13cecd18893e" (UID: "2e4047bc-d968-4163-82f1-13cecd18893e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.403345 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh" (OuterVolumeSpecName: "kube-api-access-9mnqh") pod "2e4047bc-d968-4163-82f1-13cecd18893e" (UID: "2e4047bc-d968-4163-82f1-13cecd18893e"). InnerVolumeSpecName "kube-api-access-9mnqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.419310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util" (OuterVolumeSpecName: "util") pod "2e4047bc-d968-4163-82f1-13cecd18893e" (UID: "2e4047bc-d968-4163-82f1-13cecd18893e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.501535 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-util\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.501584 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e4047bc-d968-4163-82f1-13cecd18893e-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.501594 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mnqh\" (UniqueName: \"kubernetes.io/projected/2e4047bc-d968-4163-82f1-13cecd18893e-kube-api-access-9mnqh\") on node \"crc\" DevicePath \"\"" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.987074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" event={"ID":"2e4047bc-d968-4163-82f1-13cecd18893e","Type":"ContainerDied","Data":"574f780f0fa36ae610634245afd7290ae3dc549b1d7c077b4743d15e1603aa06"} Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.987107 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574f780f0fa36ae610634245afd7290ae3dc549b1d7c077b4743d15e1603aa06" Feb 21 06:59:36 crc kubenswrapper[4820]: I0221 06:59:36.987151 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp" Feb 21 06:59:37 crc kubenswrapper[4820]: I0221 06:59:37.701884 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" path="/var/lib/kubelet/pods/18b46a58-b11c-4760-bd38-1c875c4ecf21/volumes" Feb 21 06:59:41 crc kubenswrapper[4820]: I0221 06:59:41.729397 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286258 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl"] Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286802 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286817 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286832 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="pull" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286839 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="pull" Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286849 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="extract" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286858 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="extract" Feb 21 06:59:45 crc kubenswrapper[4820]: E0221 06:59:45.286879 4820 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="util" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.286887 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="util" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.287011 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4047bc-d968-4163-82f1-13cecd18893e" containerName="extract" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.287026 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b46a58-b11c-4760-bd38-1c875c4ecf21" containerName="console" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.287582 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.288968 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.290354 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.290581 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.291207 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5b4d4" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.292719 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.299853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl"] Feb 21 06:59:45 crc 
kubenswrapper[4820]: I0221 06:59:45.422955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-apiservice-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.423067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfb59\" (UniqueName: \"kubernetes.io/projected/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-kube-api-access-sfb59\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.423194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-webhook-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.502622 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c68698666-cvwrd"] Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.503353 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.505155 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.505350 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.505782 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8bptw" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.516702 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c68698666-cvwrd"] Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.524023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-webhook-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.524075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-apiservice-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.524095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfb59\" (UniqueName: \"kubernetes.io/projected/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-kube-api-access-sfb59\") pod 
\"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.529958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-webhook-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.529973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-apiservice-cert\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.544102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfb59\" (UniqueName: \"kubernetes.io/projected/bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d-kube-api-access-sfb59\") pod \"metallb-operator-controller-manager-9bd6bbfc6-srwvl\" (UID: \"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d\") " pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.605093 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.624918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdm8\" (UniqueName: \"kubernetes.io/projected/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-kube-api-access-rwdm8\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.624969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-webhook-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.624997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-apiservice-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.728571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdm8\" (UniqueName: \"kubernetes.io/projected/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-kube-api-access-rwdm8\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.728661 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-webhook-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.728699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-apiservice-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.745341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-apiservice-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.748317 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-webhook-cert\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.754659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdm8\" (UniqueName: \"kubernetes.io/projected/e17110d4-51ce-4fca-a5e7-ba4eedeb42a8-kube-api-access-rwdm8\") pod \"metallb-operator-webhook-server-c68698666-cvwrd\" (UID: \"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8\") " 
pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.816875 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 06:59:45 crc kubenswrapper[4820]: I0221 06:59:45.827394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl"] Feb 21 06:59:45 crc kubenswrapper[4820]: W0221 06:59:45.837703 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc44a7fe_6bdf_4d85_a2aa_aeafa3d1d74d.slice/crio-8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3 WatchSource:0}: Error finding container 8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3: Status 404 returned error can't find the container with id 8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3 Feb 21 06:59:46 crc kubenswrapper[4820]: I0221 06:59:46.025056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c68698666-cvwrd"] Feb 21 06:59:46 crc kubenswrapper[4820]: W0221 06:59:46.030770 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17110d4_51ce_4fca_a5e7_ba4eedeb42a8.slice/crio-ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec WatchSource:0}: Error finding container ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec: Status 404 returned error can't find the container with id ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec Feb 21 06:59:46 crc kubenswrapper[4820]: I0221 06:59:46.031869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" 
event={"ID":"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d","Type":"ContainerStarted","Data":"8c3850f244d7a2bf9a5aa030cbe62d857a5c99da1b202a398ef445da782505d3"} Feb 21 06:59:47 crc kubenswrapper[4820]: I0221 06:59:47.046599 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" event={"ID":"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8","Type":"ContainerStarted","Data":"ed133935f4713c12cd8565aec958fe1b8559777bb7147d14c65860cf34d1baec"} Feb 21 06:59:49 crc kubenswrapper[4820]: I0221 06:59:49.059163 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" event={"ID":"bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d","Type":"ContainerStarted","Data":"5d2b4cef7150794e3a2371d89dc9e308d2184074f06af2a1ae6bae14a39ad714"} Feb 21 06:59:49 crc kubenswrapper[4820]: I0221 06:59:49.059673 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 06:59:49 crc kubenswrapper[4820]: I0221 06:59:49.084353 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" podStartSLOduration=1.3092193189999999 podStartE2EDuration="4.084333108s" podCreationTimestamp="2026-02-21 06:59:45 +0000 UTC" firstStartedPulling="2026-02-21 06:59:45.839557846 +0000 UTC m=+760.872642044" lastFinishedPulling="2026-02-21 06:59:48.614671635 +0000 UTC m=+763.647755833" observedRunningTime="2026-02-21 06:59:49.076056951 +0000 UTC m=+764.109141150" watchObservedRunningTime="2026-02-21 06:59:49.084333108 +0000 UTC m=+764.117417326" Feb 21 06:59:51 crc kubenswrapper[4820]: I0221 06:59:51.075870 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" 
event={"ID":"e17110d4-51ce-4fca-a5e7-ba4eedeb42a8","Type":"ContainerStarted","Data":"6cfd838986ea6b549ace2ad3be7626e9a02cc705e7d23111040c6004fa7c8f36"} Feb 21 06:59:51 crc kubenswrapper[4820]: I0221 06:59:51.076262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.191458 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" podStartSLOduration=11.286517671 podStartE2EDuration="15.191440576s" podCreationTimestamp="2026-02-21 06:59:45 +0000 UTC" firstStartedPulling="2026-02-21 06:59:46.035750999 +0000 UTC m=+761.068835197" lastFinishedPulling="2026-02-21 06:59:49.940673904 +0000 UTC m=+764.973758102" observedRunningTime="2026-02-21 06:59:51.100294395 +0000 UTC m=+766.133378593" watchObservedRunningTime="2026-02-21 07:00:00.191440576 +0000 UTC m=+775.224524774" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.196954 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.197754 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.201124 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.202215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.220218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.303191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.303414 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.303474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.405333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.405410 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.405438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.406438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.412184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.432993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"collect-profiles-29527620-dh5dn\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.513543 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:00 crc kubenswrapper[4820]: I0221 07:00:00.906831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:00:01 crc kubenswrapper[4820]: I0221 07:00:01.136292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerStarted","Data":"5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758"} Feb 21 07:00:01 crc kubenswrapper[4820]: I0221 07:00:01.136601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerStarted","Data":"7ccb949d706acc9f1588c7ac95f689c87c2049502040f21e82a12613a5dc82d9"} Feb 21 07:00:01 crc kubenswrapper[4820]: I0221 07:00:01.152776 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" 
podStartSLOduration=1.152761065 podStartE2EDuration="1.152761065s" podCreationTimestamp="2026-02-21 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:00:01.150648758 +0000 UTC m=+776.183732956" watchObservedRunningTime="2026-02-21 07:00:01.152761065 +0000 UTC m=+776.185845263" Feb 21 07:00:02 crc kubenswrapper[4820]: I0221 07:00:02.143501 4820 generic.go:334] "Generic (PLEG): container finished" podID="54597218-e332-4423-adc0-b4be2977a4ce" containerID="5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758" exitCode=0 Feb 21 07:00:02 crc kubenswrapper[4820]: I0221 07:00:02.143549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerDied","Data":"5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758"} Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.376089 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") pod \"54597218-e332-4423-adc0-b4be2977a4ce\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545209 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") pod \"54597218-e332-4423-adc0-b4be2977a4ce\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") pod \"54597218-e332-4423-adc0-b4be2977a4ce\" (UID: \"54597218-e332-4423-adc0-b4be2977a4ce\") " Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.545862 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "54597218-e332-4423-adc0-b4be2977a4ce" (UID: "54597218-e332-4423-adc0-b4be2977a4ce"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.546192 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54597218-e332-4423-adc0-b4be2977a4ce-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.549814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54597218-e332-4423-adc0-b4be2977a4ce" (UID: "54597218-e332-4423-adc0-b4be2977a4ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.549816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql" (OuterVolumeSpecName: "kube-api-access-tcbql") pod "54597218-e332-4423-adc0-b4be2977a4ce" (UID: "54597218-e332-4423-adc0-b4be2977a4ce"). InnerVolumeSpecName "kube-api-access-tcbql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.647456 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcbql\" (UniqueName: \"kubernetes.io/projected/54597218-e332-4423-adc0-b4be2977a4ce-kube-api-access-tcbql\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:03 crc kubenswrapper[4820]: I0221 07:00:03.647500 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54597218-e332-4423-adc0-b4be2977a4ce-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:04 crc kubenswrapper[4820]: I0221 07:00:04.153312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" event={"ID":"54597218-e332-4423-adc0-b4be2977a4ce","Type":"ContainerDied","Data":"7ccb949d706acc9f1588c7ac95f689c87c2049502040f21e82a12613a5dc82d9"} Feb 21 07:00:04 crc kubenswrapper[4820]: I0221 07:00:04.153665 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ccb949d706acc9f1588c7ac95f689c87c2049502040f21e82a12613a5dc82d9" Feb 21 07:00:04 crc kubenswrapper[4820]: I0221 07:00:04.153400 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn" Feb 21 07:00:05 crc kubenswrapper[4820]: I0221 07:00:05.821080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c68698666-cvwrd" Feb 21 07:00:13 crc kubenswrapper[4820]: I0221 07:00:13.816436 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:00:13 crc kubenswrapper[4820]: I0221 07:00:13.816696 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:00:25 crc kubenswrapper[4820]: I0221 07:00:25.607840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-9bd6bbfc6-srwvl" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.260846 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq"] Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.261091 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54597218-e332-4423-adc0-b4be2977a4ce" containerName="collect-profiles" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.261106 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="54597218-e332-4423-adc0-b4be2977a4ce" containerName="collect-profiles" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.261264 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="54597218-e332-4423-adc0-b4be2977a4ce" containerName="collect-profiles" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.261858 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.265163 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.265502 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8ljn7" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.280766 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dr9qm"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.284224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.286606 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.289950 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.290986 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.340919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2827f692-18f9-4d32-b7bd-636d595a008f-frr-startup\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341032 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-sockets\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2827f692-18f9-4d32-b7bd-636d595a008f-metrics-certs\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4zl\" (UniqueName: \"kubernetes.io/projected/2827f692-18f9-4d32-b7bd-636d595a008f-kube-api-access-wn4zl\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p84mk\" (UniqueName: \"kubernetes.io/projected/a5c8b64a-a6da-435e-a87d-bd397ad045a4-kube-api-access-p84mk\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341220 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-conf\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-metrics\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.341324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-reloader\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.386511 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cwv62"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.387293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.390301 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.390485 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.391517 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.391790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cx7cq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.411606 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jrcl5"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.412508 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.414522 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.431136 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jrcl5"] Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-metrics\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-reloader\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2827f692-18f9-4d32-b7bd-636d595a008f-frr-startup\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-sockets\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442554 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/2827f692-18f9-4d32-b7bd-636d595a008f-metrics-certs\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4zl\" (UniqueName: \"kubernetes.io/projected/2827f692-18f9-4d32-b7bd-636d595a008f-kube-api-access-wn4zl\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p84mk\" (UniqueName: \"kubernetes.io/projected/a5c8b64a-a6da-435e-a87d-bd397ad045a4-kube-api-access-p84mk\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.442642 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-conf\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.443017 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-conf\") pod 
\"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.443480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-metrics\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.443655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-reloader\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.444382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2827f692-18f9-4d32-b7bd-636d595a008f-frr-startup\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.444558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2827f692-18f9-4d32-b7bd-636d595a008f-frr-sockets\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.445257 4820 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.445306 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert podName:a5c8b64a-a6da-435e-a87d-bd397ad045a4 nodeName:}" failed. 
No retries permitted until 2026-02-21 07:00:26.945289987 +0000 UTC m=+801.978374185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert") pod "frr-k8s-webhook-server-78b44bf5bb-jw8nq" (UID: "a5c8b64a-a6da-435e-a87d-bd397ad045a4") : secret "frr-k8s-webhook-server-cert" not found Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.452218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2827f692-18f9-4d32-b7bd-636d595a008f-metrics-certs\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.482575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p84mk\" (UniqueName: \"kubernetes.io/projected/a5c8b64a-a6da-435e-a87d-bd397ad045a4-kube-api-access-p84mk\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.484896 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4zl\" (UniqueName: \"kubernetes.io/projected/2827f692-18f9-4d32-b7bd-636d595a008f-kube-api-access-wn4zl\") pod \"frr-k8s-dr9qm\" (UID: \"2827f692-18f9-4d32-b7bd-636d595a008f\") " pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metallb-excludel2\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545226 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtz5\" (UniqueName: \"kubernetes.io/projected/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-kube-api-access-zqtz5\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-metrics-certs\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metrics-certs\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9x2j\" (UniqueName: \"kubernetes.io/projected/6f342ec6-aed8-48ff-a1ba-9d6634bda927-kube-api-access-t9x2j\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.545620 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-cert\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.634606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.647068 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-cert\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.647167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metallb-excludel2\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metallb-excludel2\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648439 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtz5\" (UniqueName: \"kubernetes.io/projected/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-kube-api-access-zqtz5\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc 
kubenswrapper[4820]: I0221 07:00:26.648523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-metrics-certs\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.648643 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: E0221 07:00:26.648709 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist podName:cc577a47-69e2-4ae2-93c1-e922f0c6e3d8 nodeName:}" failed. No retries permitted until 2026-02-21 07:00:27.148694348 +0000 UTC m=+802.181778556 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist") pod "speaker-cwv62" (UID: "cc577a47-69e2-4ae2-93c1-e922f0c6e3d8") : secret "metallb-memberlist" not found Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648783 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metrics-certs\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.648821 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9x2j\" (UniqueName: \"kubernetes.io/projected/6f342ec6-aed8-48ff-a1ba-9d6634bda927-kube-api-access-t9x2j\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.652260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-metrics-certs\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.652393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-metrics-certs\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.668273 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9x2j\" (UniqueName: 
\"kubernetes.io/projected/6f342ec6-aed8-48ff-a1ba-9d6634bda927-kube-api-access-t9x2j\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.670856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f342ec6-aed8-48ff-a1ba-9d6634bda927-cert\") pod \"controller-69bbfbf88f-jrcl5\" (UID: \"6f342ec6-aed8-48ff-a1ba-9d6634bda927\") " pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.672863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtz5\" (UniqueName: \"kubernetes.io/projected/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-kube-api-access-zqtz5\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.725016 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.898421 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jrcl5"] Feb 21 07:00:26 crc kubenswrapper[4820]: W0221 07:00:26.903696 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f342ec6_aed8_48ff_a1ba_9d6634bda927.slice/crio-403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1 WatchSource:0}: Error finding container 403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1: Status 404 returned error can't find the container with id 403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1 Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.952669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:26 crc kubenswrapper[4820]: I0221 07:00:26.957749 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5c8b64a-a6da-435e-a87d-bd397ad045a4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-jw8nq\" (UID: \"a5c8b64a-a6da-435e-a87d-bd397ad045a4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.154860 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.158602 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc577a47-69e2-4ae2-93c1-e922f0c6e3d8-memberlist\") pod \"speaker-cwv62\" (UID: \"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8\") " pod="metallb-system/speaker-cwv62" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.180742 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281058 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jrcl5" event={"ID":"6f342ec6-aed8-48ff-a1ba-9d6634bda927","Type":"ContainerStarted","Data":"62242f816db7627dd228ae881131350ead84f4174402a57ead25b88741c64144"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jrcl5" event={"ID":"6f342ec6-aed8-48ff-a1ba-9d6634bda927","Type":"ContainerStarted","Data":"fd5a6053ed0b3d06cf7020e7625083fa1095dfed9938ffe55082a7de5922f93d"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jrcl5" event={"ID":"6f342ec6-aed8-48ff-a1ba-9d6634bda927","Type":"ContainerStarted","Data":"403a0145606b66351e07610573ec9cd781c13f8a21d583a4b1f94dfb735b5ec1"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.281985 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.286044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"95824013e35b2f466e23d5c8960a4feaac2ed23ab70a402f931b714e0782add1"} Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.298675 4820 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cwv62" Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.301325 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jrcl5" podStartSLOduration=1.3013063329999999 podStartE2EDuration="1.301306333s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:00:27.297442377 +0000 UTC m=+802.330526575" watchObservedRunningTime="2026-02-21 07:00:27.301306333 +0000 UTC m=+802.334390531" Feb 21 07:00:27 crc kubenswrapper[4820]: W0221 07:00:27.359018 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc577a47_69e2_4ae2_93c1_e922f0c6e3d8.slice/crio-81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181 WatchSource:0}: Error finding container 81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181: Status 404 returned error can't find the container with id 81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181 Feb 21 07:00:27 crc kubenswrapper[4820]: I0221 07:00:27.410749 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq"] Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.295621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwv62" event={"ID":"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8","Type":"ContainerStarted","Data":"ea5a189303a35e30ead1f2f1e318066a9153121ea06eccd99be664bf632098d3"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.295999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwv62" 
event={"ID":"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8","Type":"ContainerStarted","Data":"e365d5006e67a435f8e3bff1160849e85cf1eb8d9a8a4cd40ceb72f6d040e2ac"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.296013 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwv62" event={"ID":"cc577a47-69e2-4ae2-93c1-e922f0c6e3d8","Type":"ContainerStarted","Data":"81e4ae818232e21609ecc99c6e098bf55a0f004506e201635f47297aea6ff181"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.296319 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cwv62" Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.297556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" event={"ID":"a5c8b64a-a6da-435e-a87d-bd397ad045a4","Type":"ContainerStarted","Data":"ff4a9ddfbbd46a0e40d00a33931ffc28ec5efedc97e4745ca4d93c2743a157fa"} Feb 21 07:00:28 crc kubenswrapper[4820]: I0221 07:00:28.319391 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cwv62" podStartSLOduration=2.319371467 podStartE2EDuration="2.319371467s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:00:28.313924618 +0000 UTC m=+803.347008826" watchObservedRunningTime="2026-02-21 07:00:28.319371467 +0000 UTC m=+803.352455675" Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.354775 4820 generic.go:334] "Generic (PLEG): container finished" podID="2827f692-18f9-4d32-b7bd-636d595a008f" containerID="cbc113af5c2a6bdea91d6463d7f779cb6340f744b430c5989069c49cee009dbe" exitCode=0 Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.354909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" 
event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerDied","Data":"cbc113af5c2a6bdea91d6463d7f779cb6340f744b430c5989069c49cee009dbe"} Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.358159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" event={"ID":"a5c8b64a-a6da-435e-a87d-bd397ad045a4","Type":"ContainerStarted","Data":"e447d16c88f1358b67cafc3ddcdeec7050e903c4de4b4ea3928cb89c1ceff4f8"} Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.358329 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:34 crc kubenswrapper[4820]: I0221 07:00:34.412687 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" podStartSLOduration=2.396757596 podStartE2EDuration="8.412650219s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="2026-02-21 07:00:27.409435644 +0000 UTC m=+802.442519842" lastFinishedPulling="2026-02-21 07:00:33.425328257 +0000 UTC m=+808.458412465" observedRunningTime="2026-02-21 07:00:34.400808665 +0000 UTC m=+809.433892883" watchObservedRunningTime="2026-02-21 07:00:34.412650219 +0000 UTC m=+809.445734457" Feb 21 07:00:35 crc kubenswrapper[4820]: I0221 07:00:35.368043 4820 generic.go:334] "Generic (PLEG): container finished" podID="2827f692-18f9-4d32-b7bd-636d595a008f" containerID="ada97ed03b106b3662de92fc179820a1b2bfc50befca899ecd5e19d02ad05eba" exitCode=0 Feb 21 07:00:35 crc kubenswrapper[4820]: I0221 07:00:35.368104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerDied","Data":"ada97ed03b106b3662de92fc179820a1b2bfc50befca899ecd5e19d02ad05eba"} Feb 21 07:00:36 crc kubenswrapper[4820]: I0221 07:00:36.377150 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="2827f692-18f9-4d32-b7bd-636d595a008f" containerID="57e6e5fddc66aad5c01dd29af6184c28a5f49fadf6eac0beced3eb6d80e678e7" exitCode=0 Feb 21 07:00:36 crc kubenswrapper[4820]: I0221 07:00:36.377196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerDied","Data":"57e6e5fddc66aad5c01dd29af6184c28a5f49fadf6eac0beced3eb6d80e678e7"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.302193 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cwv62" Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"3597e26d481415fd9774aaee5b50fbf16caf0b85a46d877281ea04cc0f723f6c"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"bd1b6f9de77e4cf318538404d88d743002bcf5227f3fed5cbf04591d64e8575a"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"78d0050cea8ddaecf61b0ec375254ab380c0ed9e64feef093223b8e7af31e624"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388694 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"1e0ccc996b64955392d8e84e98400d4e7cbdea6037b12f29e3df960c82e93ffc"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" 
event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"f5d43d720b554891a559e1ae4e21b5093a52d4911b630956d717d3897200be4e"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dr9qm" event={"ID":"2827f692-18f9-4d32-b7bd-636d595a008f","Type":"ContainerStarted","Data":"13e5076f7983f2e11d2466556088039a81801da970e145d79d3e7cfda2f20cd1"} Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.388807 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:37 crc kubenswrapper[4820]: I0221 07:00:37.413913 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dr9qm" podStartSLOduration=4.778632805 podStartE2EDuration="11.413890862s" podCreationTimestamp="2026-02-21 07:00:26 +0000 UTC" firstStartedPulling="2026-02-21 07:00:26.776813547 +0000 UTC m=+801.809897745" lastFinishedPulling="2026-02-21 07:00:33.412071574 +0000 UTC m=+808.445155802" observedRunningTime="2026-02-21 07:00:37.410254352 +0000 UTC m=+812.443338570" watchObservedRunningTime="2026-02-21 07:00:37.413890862 +0000 UTC m=+812.446975060" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.663446 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7"] Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.665326 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.668300 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.675012 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7"] Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.712732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.712898 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.712992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: 
I0221 07:00:38.813485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.813526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.813547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.814056 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.814101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:38 crc kubenswrapper[4820]: I0221 07:00:38.832627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.012775 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.209569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7"] Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.403745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerStarted","Data":"9d75a6abbe2c6d4a98aa091f1cede7bbf555078b65f976d441dd58e409620f47"} Feb 21 07:00:39 crc kubenswrapper[4820]: I0221 07:00:39.403787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerStarted","Data":"0c6e07eeb7f809ed6f9e19a012c146b3f11c5eab6437acfc6fbff21d36902515"} Feb 21 07:00:40 crc kubenswrapper[4820]: I0221 07:00:40.410852 4820 
generic.go:334] "Generic (PLEG): container finished" podID="a332a364-5157-4e4a-8313-7b267a41ac97" containerID="9d75a6abbe2c6d4a98aa091f1cede7bbf555078b65f976d441dd58e409620f47" exitCode=0 Feb 21 07:00:40 crc kubenswrapper[4820]: I0221 07:00:40.410891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"9d75a6abbe2c6d4a98aa091f1cede7bbf555078b65f976d441dd58e409620f47"} Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.023966 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.026079 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.041365 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.054041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.054136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.054165 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.155817 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156497 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.156641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.176151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"redhat-operators-pcnnk\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.363761 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.635597 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.681751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:41 crc kubenswrapper[4820]: I0221 07:00:41.800104 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:00:41 crc kubenswrapper[4820]: W0221 07:00:41.805554 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23588b8_ba46_4a3b_8f44_22c46230f838.slice/crio-9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c WatchSource:0}: Error finding container 9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c: Status 404 returned error can't find the container with id 9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c Feb 21 07:00:42 crc kubenswrapper[4820]: I0221 07:00:42.424373 4820 generic.go:334] "Generic (PLEG): container 
finished" podID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" exitCode=0 Feb 21 07:00:42 crc kubenswrapper[4820]: I0221 07:00:42.424425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c"} Feb 21 07:00:42 crc kubenswrapper[4820]: I0221 07:00:42.424739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerStarted","Data":"9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c"} Feb 21 07:00:43 crc kubenswrapper[4820]: I0221 07:00:43.816329 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:00:43 crc kubenswrapper[4820]: I0221 07:00:43.816392 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:00:44 crc kubenswrapper[4820]: I0221 07:00:44.436829 4820 generic.go:334] "Generic (PLEG): container finished" podID="a332a364-5157-4e4a-8313-7b267a41ac97" containerID="d6e1b1acd64b121b18426115b034bc07a2c112c32661f175e5cb3efb706dfb9c" exitCode=0 Feb 21 07:00:44 crc kubenswrapper[4820]: I0221 07:00:44.436908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" 
event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"d6e1b1acd64b121b18426115b034bc07a2c112c32661f175e5cb3efb706dfb9c"} Feb 21 07:00:44 crc kubenswrapper[4820]: I0221 07:00:44.446743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerStarted","Data":"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac"} Feb 21 07:00:44 crc kubenswrapper[4820]: E0221 07:00:44.873764 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23588b8_ba46_4a3b_8f44_22c46230f838.slice/crio-conmon-59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac.scope\": RecentStats: unable to find data in memory cache]" Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.456355 4820 generic.go:334] "Generic (PLEG): container finished" podID="a332a364-5157-4e4a-8313-7b267a41ac97" containerID="27047234a26ed8c4476270f8c583712d5171722ca88352bd3ab0081bcd984f39" exitCode=0 Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.456464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"27047234a26ed8c4476270f8c583712d5171722ca88352bd3ab0081bcd984f39"} Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.458902 4820 generic.go:334] "Generic (PLEG): container finished" podID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" exitCode=0 Feb 21 07:00:45 crc kubenswrapper[4820]: I0221 07:00:45.458955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" 
event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac"} Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.482750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerStarted","Data":"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed"} Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.502701 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pcnnk" podStartSLOduration=1.973792804 podStartE2EDuration="5.50268306s" podCreationTimestamp="2026-02-21 07:00:41 +0000 UTC" firstStartedPulling="2026-02-21 07:00:42.426011301 +0000 UTC m=+817.459095499" lastFinishedPulling="2026-02-21 07:00:45.954901557 +0000 UTC m=+820.987985755" observedRunningTime="2026-02-21 07:00:46.500640733 +0000 UTC m=+821.533724941" watchObservedRunningTime="2026-02-21 07:00:46.50268306 +0000 UTC m=+821.535767258" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.641454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dr9qm" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.729395 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jrcl5" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.753016 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.860444 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") pod \"a332a364-5157-4e4a-8313-7b267a41ac97\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.860482 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") pod \"a332a364-5157-4e4a-8313-7b267a41ac97\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.860562 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") pod \"a332a364-5157-4e4a-8313-7b267a41ac97\" (UID: \"a332a364-5157-4e4a-8313-7b267a41ac97\") " Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.861750 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle" (OuterVolumeSpecName: "bundle") pod "a332a364-5157-4e4a-8313-7b267a41ac97" (UID: "a332a364-5157-4e4a-8313-7b267a41ac97"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.866665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx" (OuterVolumeSpecName: "kube-api-access-6hxqx") pod "a332a364-5157-4e4a-8313-7b267a41ac97" (UID: "a332a364-5157-4e4a-8313-7b267a41ac97"). InnerVolumeSpecName "kube-api-access-6hxqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.870163 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util" (OuterVolumeSpecName: "util") pod "a332a364-5157-4e4a-8313-7b267a41ac97" (UID: "a332a364-5157-4e4a-8313-7b267a41ac97"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.961772 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hxqx\" (UniqueName: \"kubernetes.io/projected/a332a364-5157-4e4a-8313-7b267a41ac97-kube-api-access-6hxqx\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.961804 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-util\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:46 crc kubenswrapper[4820]: I0221 07:00:46.961814 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a332a364-5157-4e4a-8313-7b267a41ac97-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.187390 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-jw8nq" Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.489880 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.489876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7" event={"ID":"a332a364-5157-4e4a-8313-7b267a41ac97","Type":"ContainerDied","Data":"0c6e07eeb7f809ed6f9e19a012c146b3f11c5eab6437acfc6fbff21d36902515"} Feb 21 07:00:47 crc kubenswrapper[4820]: I0221 07:00:47.489925 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6e07eeb7f809ed6f9e19a012c146b3f11c5eab6437acfc6fbff21d36902515" Feb 21 07:00:51 crc kubenswrapper[4820]: I0221 07:00:51.364627 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:51 crc kubenswrapper[4820]: I0221 07:00:51.364889 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.270251 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f"] Feb 21 07:00:52 crc kubenswrapper[4820]: E0221 07:00:52.271001 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="util" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271122 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="util" Feb 21 07:00:52 crc kubenswrapper[4820]: E0221 07:00:52.271207 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="extract" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271302 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="extract" Feb 21 07:00:52 crc kubenswrapper[4820]: E0221 07:00:52.271481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="pull" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271565 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="pull" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.271898 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a332a364-5157-4e4a-8313-7b267a41ac97" containerName="extract" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.272720 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.276762 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.276830 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-sw5zc" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.276842 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.314078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f"] Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.326399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94w6\" (UniqueName: \"kubernetes.io/projected/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-kube-api-access-d94w6\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" 
(UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.326470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.423679 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pcnnk" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" probeResult="failure" output=< Feb 21 07:00:52 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 07:00:52 crc kubenswrapper[4820]: > Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.427481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d94w6\" (UniqueName: \"kubernetes.io/projected/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-kube-api-access-d94w6\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.427534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.427899 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.443953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d94w6\" (UniqueName: \"kubernetes.io/projected/4ae6b64f-6c78-415f-b36e-e9cf9ec722dd-kube-api-access-d94w6\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lkl7f\" (UID: \"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:52 crc kubenswrapper[4820]: I0221 07:00:52.589143 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" Feb 21 07:00:53 crc kubenswrapper[4820]: I0221 07:00:53.058176 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f"] Feb 21 07:00:53 crc kubenswrapper[4820]: W0221 07:00:53.060172 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae6b64f_6c78_415f_b36e_e9cf9ec722dd.slice/crio-4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc WatchSource:0}: Error finding container 4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc: Status 404 returned error can't find the container with id 4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc Feb 21 07:00:53 crc kubenswrapper[4820]: I0221 07:00:53.525345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" 
event={"ID":"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd","Type":"ContainerStarted","Data":"4b5ef0f5bb12719ff440ef303f0804c0a1b4b503df71bc552f8ddc13d8f4a8cc"} Feb 21 07:00:56 crc kubenswrapper[4820]: I0221 07:00:56.548451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" event={"ID":"4ae6b64f-6c78-415f-b36e-e9cf9ec722dd","Type":"ContainerStarted","Data":"9f73b6d49d299652d52c831672834a19d4f38ca1dee8dfc350ec43f28812821f"} Feb 21 07:00:56 crc kubenswrapper[4820]: I0221 07:00:56.568212 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lkl7f" podStartSLOduration=1.285433727 podStartE2EDuration="4.568197981s" podCreationTimestamp="2026-02-21 07:00:52 +0000 UTC" firstStartedPulling="2026-02-21 07:00:53.062664505 +0000 UTC m=+828.095748723" lastFinishedPulling="2026-02-21 07:00:56.345428779 +0000 UTC m=+831.378512977" observedRunningTime="2026-02-21 07:00:56.56745323 +0000 UTC m=+831.600537428" watchObservedRunningTime="2026-02-21 07:00:56.568197981 +0000 UTC m=+831.601282179" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.900264 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-r5ddv"] Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.902960 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.912026 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.912380 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.912890 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8rglc" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.923047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-r5ddv"] Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.935487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fgs\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-kube-api-access-55fgs\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:00 crc kubenswrapper[4820]: I0221 07:01:00.935794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.036690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: 
\"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.037268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fgs\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-kube-api-access-55fgs\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.062151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.067364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fgs\" (UniqueName: \"kubernetes.io/projected/e88f2404-d287-429a-a995-ea8be7fa5be8-kube-api-access-55fgs\") pod \"cert-manager-webhook-6888856db4-r5ddv\" (UID: \"e88f2404-d287-429a-a995-ea8be7fa5be8\") " pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.224464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.374196 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-9lkfz"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.374879 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.381272 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jskkw" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.388603 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-9lkfz"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.426580 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.441903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.442052 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgk8k\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-kube-api-access-fgk8k\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.464400 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.543532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgk8k\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-kube-api-access-fgk8k\") pod 
\"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.543616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.559891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.560873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgk8k\" (UniqueName: \"kubernetes.io/projected/3a53f347-c86d-4ef3-82c2-29549135afe6-kube-api-access-fgk8k\") pod \"cert-manager-cainjector-5545bd876-9lkfz\" (UID: \"3a53f347-c86d-4ef3-82c2-29549135afe6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.637289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-r5ddv"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.706161 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.810074 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:01:01 crc kubenswrapper[4820]: I0221 07:01:01.890803 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-9lkfz"] Feb 21 07:01:01 crc kubenswrapper[4820]: W0221 07:01:01.899668 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a53f347_c86d_4ef3_82c2_29549135afe6.slice/crio-1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb WatchSource:0}: Error finding container 1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb: Status 404 returned error can't find the container with id 1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.579158 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" event={"ID":"e88f2404-d287-429a-a995-ea8be7fa5be8","Type":"ContainerStarted","Data":"816b6cef803ab810406e3a0facf81dc80e95f6624d7321feba9ece3aa038238b"} Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.580437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" event={"ID":"3a53f347-c86d-4ef3-82c2-29549135afe6","Type":"ContainerStarted","Data":"1adf2b48cc4c435f04a3b81d2b2e7ccb1783451726bb0b159954f8949b992bfb"} Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.580594 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pcnnk" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" containerID="cri-o://2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" gracePeriod=2 
Feb 21 07:01:02 crc kubenswrapper[4820]: I0221 07:01:02.938196 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.061355 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") pod \"b23588b8-ba46-4a3b-8f44-22c46230f838\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.061512 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") pod \"b23588b8-ba46-4a3b-8f44-22c46230f838\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.061611 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") pod \"b23588b8-ba46-4a3b-8f44-22c46230f838\" (UID: \"b23588b8-ba46-4a3b-8f44-22c46230f838\") " Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.062873 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities" (OuterVolumeSpecName: "utilities") pod "b23588b8-ba46-4a3b-8f44-22c46230f838" (UID: "b23588b8-ba46-4a3b-8f44-22c46230f838"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.067130 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf" (OuterVolumeSpecName: "kube-api-access-2dzwf") pod "b23588b8-ba46-4a3b-8f44-22c46230f838" (UID: "b23588b8-ba46-4a3b-8f44-22c46230f838"). InnerVolumeSpecName "kube-api-access-2dzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.163223 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzwf\" (UniqueName: \"kubernetes.io/projected/b23588b8-ba46-4a3b-8f44-22c46230f838-kube-api-access-2dzwf\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.163273 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.190644 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b23588b8-ba46-4a3b-8f44-22c46230f838" (UID: "b23588b8-ba46-4a3b-8f44-22c46230f838"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.264972 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23588b8-ba46-4a3b-8f44-22c46230f838-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591054 4820 generic.go:334] "Generic (PLEG): container finished" podID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" exitCode=0 Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591123 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed"} Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591184 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pcnnk" event={"ID":"b23588b8-ba46-4a3b-8f44-22c46230f838","Type":"ContainerDied","Data":"9fac7c444fcd12fd7457c0cac7f26e876fef1e07d2e9d09e4ec96630d3fa592c"} Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591209 4820 scope.go:117] "RemoveContainer" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.591406 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pcnnk" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.618303 4820 scope.go:117] "RemoveContainer" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.622737 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.626905 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pcnnk"] Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.667779 4820 scope.go:117] "RemoveContainer" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.682475 4820 scope.go:117] "RemoveContainer" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" Feb 21 07:01:03 crc kubenswrapper[4820]: E0221 07:01:03.682914 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed\": container with ID starting with 2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed not found: ID does not exist" containerID="2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.682950 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed"} err="failed to get container status \"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed\": rpc error: code = NotFound desc = could not find container \"2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed\": container with ID starting with 2fa2214f789b70023ba04a8c7fdd72e76c2f54e9a9911cf21ff7bff30b1d96ed not found: ID does 
not exist" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.682974 4820 scope.go:117] "RemoveContainer" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" Feb 21 07:01:03 crc kubenswrapper[4820]: E0221 07:01:03.683408 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac\": container with ID starting with 59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac not found: ID does not exist" containerID="59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.683433 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac"} err="failed to get container status \"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac\": rpc error: code = NotFound desc = could not find container \"59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac\": container with ID starting with 59ed6bd7bbb062c8c5a1e20654b1c28f4472226f5f08233a7afea7733380fcac not found: ID does not exist" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.683445 4820 scope.go:117] "RemoveContainer" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" Feb 21 07:01:03 crc kubenswrapper[4820]: E0221 07:01:03.683668 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c\": container with ID starting with b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c not found: ID does not exist" containerID="b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.683693 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c"} err="failed to get container status \"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c\": rpc error: code = NotFound desc = could not find container \"b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c\": container with ID starting with b1e306035eab972aecbcda742c22dcb82fa07c0842c04bf35eda27de693c8e9c not found: ID does not exist" Feb 21 07:01:03 crc kubenswrapper[4820]: I0221 07:01:03.707936 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" path="/var/lib/kubelet/pods/b23588b8-ba46-4a3b-8f44-22c46230f838/volumes" Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.610497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" event={"ID":"3a53f347-c86d-4ef3-82c2-29549135afe6","Type":"ContainerStarted","Data":"5c89fe3bf3a643bccbbb04a348389c5348d44f109ff4137b8920630f8b4d6dab"} Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.612214 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" event={"ID":"e88f2404-d287-429a-a995-ea8be7fa5be8","Type":"ContainerStarted","Data":"7c96626874b9236e7e013698d8108ac3faf0bc4b66a40d0e23399c8a5692a8dd"} Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.612394 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.624552 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-9lkfz" podStartSLOduration=1.456616681 podStartE2EDuration="5.624535379s" podCreationTimestamp="2026-02-21 07:01:01 +0000 UTC" firstStartedPulling="2026-02-21 07:01:01.902871235 +0000 UTC m=+836.935955433" 
lastFinishedPulling="2026-02-21 07:01:06.070789913 +0000 UTC m=+841.103874131" observedRunningTime="2026-02-21 07:01:06.623512671 +0000 UTC m=+841.656596869" watchObservedRunningTime="2026-02-21 07:01:06.624535379 +0000 UTC m=+841.657619577" Feb 21 07:01:06 crc kubenswrapper[4820]: I0221 07:01:06.641616 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" podStartSLOduration=2.243895405 podStartE2EDuration="6.641596016s" podCreationTimestamp="2026-02-21 07:01:00 +0000 UTC" firstStartedPulling="2026-02-21 07:01:01.646086452 +0000 UTC m=+836.679170650" lastFinishedPulling="2026-02-21 07:01:06.043787063 +0000 UTC m=+841.076871261" observedRunningTime="2026-02-21 07:01:06.638175123 +0000 UTC m=+841.671259311" watchObservedRunningTime="2026-02-21 07:01:06.641596016 +0000 UTC m=+841.674680214" Feb 21 07:01:11 crc kubenswrapper[4820]: I0221 07:01:11.227405 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-r5ddv" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.815979 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.816107 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.816200 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.817272 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:01:13 crc kubenswrapper[4820]: I0221 07:01:13.817367 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19" gracePeriod=600 Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660023 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19" exitCode=0 Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19"} Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"} Feb 21 07:01:14 crc kubenswrapper[4820]: I0221 07:01:14.660726 4820 scope.go:117] "RemoveContainer" 
containerID="3fc9b08aad2edad9a74ca93f30446b530336f95338e8d6ab6b9d614b704623df" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.838255 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-mf4f6"] Feb 21 07:01:19 crc kubenswrapper[4820]: E0221 07:01:19.839741 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-content" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.839766 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-content" Feb 21 07:01:19 crc kubenswrapper[4820]: E0221 07:01:19.839787 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.839799 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" Feb 21 07:01:19 crc kubenswrapper[4820]: E0221 07:01:19.839818 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-utilities" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.839830 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="extract-utilities" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.840066 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23588b8-ba46-4a3b-8f44-22c46230f838" containerName="registry-server" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.840960 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.846339 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cpqld" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.848544 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mf4f6"] Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.972894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-bound-sa-token\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:19 crc kubenswrapper[4820]: I0221 07:01:19.972974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrw57\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-kube-api-access-mrw57\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.074893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-bound-sa-token\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.075091 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrw57\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-kube-api-access-mrw57\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: 
\"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.093577 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-bound-sa-token\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.094160 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrw57\" (UniqueName: \"kubernetes.io/projected/d35515e4-d029-4f6a-be2a-d7ea32ab06ad-kube-api-access-mrw57\") pod \"cert-manager-545d4d4674-mf4f6\" (UID: \"d35515e4-d029-4f6a-be2a-d7ea32ab06ad\") " pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.171005 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mf4f6" Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.581538 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mf4f6"] Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.703069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mf4f6" event={"ID":"d35515e4-d029-4f6a-be2a-d7ea32ab06ad","Type":"ContainerStarted","Data":"77ff3c1f0a5372fa4f674e5a86a0a60ed238ef00f14e5b2435672fbd5710a013"} Feb 21 07:01:20 crc kubenswrapper[4820]: I0221 07:01:20.703124 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mf4f6" event={"ID":"d35515e4-d029-4f6a-be2a-d7ea32ab06ad","Type":"ContainerStarted","Data":"dee5a75bbc13d8d2435b00414c5efd00a0442515f8b7a6d872031937e4ee7953"} Feb 21 07:01:21 crc kubenswrapper[4820]: I0221 07:01:21.724834 4820 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-mf4f6" podStartSLOduration=2.724819222 podStartE2EDuration="2.724819222s" podCreationTimestamp="2026-02-21 07:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:01:21.722363515 +0000 UTC m=+856.755447713" watchObservedRunningTime="2026-02-21 07:01:21.724819222 +0000 UTC m=+856.757903410" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.020362 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.021635 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.024045 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8g7ps" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.024125 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.024775 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.030119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.109968 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"openstack-operator-index-zxv8h\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc 
kubenswrapper[4820]: I0221 07:01:25.210798 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"openstack-operator-index-zxv8h\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.227860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"openstack-operator-index-zxv8h\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.343677 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:25 crc kubenswrapper[4820]: I0221 07:01:25.850439 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:25 crc kubenswrapper[4820]: W0221 07:01:25.857005 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f586a4e_5100_444f_8e11_0d6f785eac00.slice/crio-42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f WatchSource:0}: Error finding container 42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f: Status 404 returned error can't find the container with id 42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f Feb 21 07:01:26 crc kubenswrapper[4820]: I0221 07:01:26.736417 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" 
event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerStarted","Data":"42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f"} Feb 21 07:01:28 crc kubenswrapper[4820]: I0221 07:01:28.403021 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:28 crc kubenswrapper[4820]: I0221 07:01:28.748478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerStarted","Data":"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329"} Feb 21 07:01:28 crc kubenswrapper[4820]: I0221 07:01:28.769157 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zxv8h" podStartSLOduration=1.7785773649999999 podStartE2EDuration="3.769132118s" podCreationTimestamp="2026-02-21 07:01:25 +0000 UTC" firstStartedPulling="2026-02-21 07:01:25.859606602 +0000 UTC m=+860.892690800" lastFinishedPulling="2026-02-21 07:01:27.850161315 +0000 UTC m=+862.883245553" observedRunningTime="2026-02-21 07:01:28.762000493 +0000 UTC m=+863.795084691" watchObservedRunningTime="2026-02-21 07:01:28.769132118 +0000 UTC m=+863.802216316" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.014445 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tt62z"] Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.015672 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.043099 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tt62z"] Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.174419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk8dh\" (UniqueName: \"kubernetes.io/projected/2af934a2-6680-4932-b3af-5f8bdee6c740-kube-api-access-vk8dh\") pod \"openstack-operator-index-tt62z\" (UID: \"2af934a2-6680-4932-b3af-5f8bdee6c740\") " pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.275484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk8dh\" (UniqueName: \"kubernetes.io/projected/2af934a2-6680-4932-b3af-5f8bdee6c740-kube-api-access-vk8dh\") pod \"openstack-operator-index-tt62z\" (UID: \"2af934a2-6680-4932-b3af-5f8bdee6c740\") " pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.302924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk8dh\" (UniqueName: \"kubernetes.io/projected/2af934a2-6680-4932-b3af-5f8bdee6c740-kube-api-access-vk8dh\") pod \"openstack-operator-index-tt62z\" (UID: \"2af934a2-6680-4932-b3af-5f8bdee6c740\") " pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.389577 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.757684 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zxv8h" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" containerID="cri-o://f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" gracePeriod=2 Feb 21 07:01:29 crc kubenswrapper[4820]: I0221 07:01:29.776769 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tt62z"] Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.052499 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.188347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") pod \"6f586a4e-5100-444f-8e11-0d6f785eac00\" (UID: \"6f586a4e-5100-444f-8e11-0d6f785eac00\") " Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.198068 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7" (OuterVolumeSpecName: "kube-api-access-sv2l7") pod "6f586a4e-5100-444f-8e11-0d6f785eac00" (UID: "6f586a4e-5100-444f-8e11-0d6f785eac00"). InnerVolumeSpecName "kube-api-access-sv2l7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.289706 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv2l7\" (UniqueName: \"kubernetes.io/projected/6f586a4e-5100-444f-8e11-0d6f785eac00-kube-api-access-sv2l7\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.765994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tt62z" event={"ID":"2af934a2-6680-4932-b3af-5f8bdee6c740","Type":"ContainerStarted","Data":"f437daf445daf36ba8d71886d37b62c64fda8f52e87ded71a0b8ee7c5686ef45"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.766343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tt62z" event={"ID":"2af934a2-6680-4932-b3af-5f8bdee6c740","Type":"ContainerStarted","Data":"893ba68971cfe0be4a5528783af19d8cc6a75cc64abc3578fb0b4fc385d62e00"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768327 4820 generic.go:334] "Generic (PLEG): container finished" podID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" exitCode=0 Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerDied","Data":"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zxv8h" event={"ID":"6f586a4e-5100-444f-8e11-0d6f785eac00","Type":"ContainerDied","Data":"42a38d966e205222202a7707ec4ffc33aa4152d744c2acd52a8c24e5ab40141f"} Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768404 4820 scope.go:117] "RemoveContainer" 
containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.768547 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zxv8h" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.785926 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tt62z" podStartSLOduration=2.4085479850000002 podStartE2EDuration="2.785899369s" podCreationTimestamp="2026-02-21 07:01:28 +0000 UTC" firstStartedPulling="2026-02-21 07:01:29.779517733 +0000 UTC m=+864.812601931" lastFinishedPulling="2026-02-21 07:01:30.156869107 +0000 UTC m=+865.189953315" observedRunningTime="2026-02-21 07:01:30.783715619 +0000 UTC m=+865.816799827" watchObservedRunningTime="2026-02-21 07:01:30.785899369 +0000 UTC m=+865.818983577" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.790797 4820 scope.go:117] "RemoveContainer" containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" Feb 21 07:01:30 crc kubenswrapper[4820]: E0221 07:01:30.791569 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329\": container with ID starting with f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329 not found: ID does not exist" containerID="f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.791618 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329"} err="failed to get container status \"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329\": rpc error: code = NotFound desc = could not find container 
\"f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329\": container with ID starting with f4a074c337dfb5afa92880ca93976dfda91693e08404135b8664ce662e0c9329 not found: ID does not exist" Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.809738 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:30 crc kubenswrapper[4820]: I0221 07:01:30.813591 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zxv8h"] Feb 21 07:01:31 crc kubenswrapper[4820]: I0221 07:01:31.705530 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" path="/var/lib/kubelet/pods/6f586a4e-5100-444f-8e11-0d6f785eac00/volumes" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.214639 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:34 crc kubenswrapper[4820]: E0221 07:01:34.215130 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.215144 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.215283 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f586a4e-5100-444f-8e11-0d6f785eac00" containerName="registry-server" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.216012 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.227996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.243555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.243644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.243735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.344899 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345007 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345522 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.345619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.372635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"redhat-marketplace-nspq6\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:34 crc kubenswrapper[4820]: I0221 07:01:34.541277 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.033137 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:35 crc kubenswrapper[4820]: W0221 07:01:35.037431 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bf0f82c_d0f3_4315_88f5_73b4a99cf1d0.slice/crio-2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b WatchSource:0}: Error finding container 2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b: Status 404 returned error can't find the container with id 2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.801763 4820 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" exitCode=0 Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.801890 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07"} Feb 21 07:01:35 crc kubenswrapper[4820]: I0221 07:01:35.802133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerStarted","Data":"2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b"} Feb 21 07:01:36 crc kubenswrapper[4820]: I0221 07:01:36.809642 4820 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" exitCode=0 Feb 21 07:01:36 crc kubenswrapper[4820]: I0221 
07:01:36.809696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc"} Feb 21 07:01:37 crc kubenswrapper[4820]: I0221 07:01:37.819645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerStarted","Data":"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a"} Feb 21 07:01:37 crc kubenswrapper[4820]: I0221 07:01:37.841470 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nspq6" podStartSLOduration=2.4362492270000002 podStartE2EDuration="3.841437733s" podCreationTimestamp="2026-02-21 07:01:34 +0000 UTC" firstStartedPulling="2026-02-21 07:01:35.802981589 +0000 UTC m=+870.836065787" lastFinishedPulling="2026-02-21 07:01:37.208170095 +0000 UTC m=+872.241254293" observedRunningTime="2026-02-21 07:01:37.835202451 +0000 UTC m=+872.868286649" watchObservedRunningTime="2026-02-21 07:01:37.841437733 +0000 UTC m=+872.874521971" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.390593 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.392098 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.416721 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:39 crc kubenswrapper[4820]: I0221 07:01:39.862942 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-tt62z" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.541751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.542294 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.580214 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:44 crc kubenswrapper[4820]: I0221 07:01:44.907941 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:45 crc kubenswrapper[4820]: I0221 07:01:45.803786 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:46 crc kubenswrapper[4820]: I0221 07:01:46.875822 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nspq6" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" containerID="cri-o://66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" gracePeriod=2 Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.054775 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff"] Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.056220 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.060844 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sqz9z" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.064021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff"] Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.217284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.217353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.217380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 
07:01:47.263568 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.318387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.318477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.318601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.319103 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc 
kubenswrapper[4820]: I0221 07:01:47.320025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.334333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.377898 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.420061 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") pod \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.420201 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") pod \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.420261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") pod \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\" (UID: \"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0\") " Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.421509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities" (OuterVolumeSpecName: "utilities") pod "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" (UID: "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.429887 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd" (OuterVolumeSpecName: "kube-api-access-dq2jd") pod "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" (UID: "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0"). 
InnerVolumeSpecName "kube-api-access-dq2jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.463622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" (UID: "6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.521878 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.521913 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.521924 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2jd\" (UniqueName: \"kubernetes.io/projected/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0-kube-api-access-dq2jd\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.578087 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff"] Feb 21 07:01:47 crc kubenswrapper[4820]: W0221 07:01:47.582601 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47790790_d956_41e0_8868_9fb9fecfefe7.slice/crio-2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd WatchSource:0}: Error finding container 2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd: Status 404 returned 
error can't find the container with id 2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.882839 4820 generic.go:334] "Generic (PLEG): container finished" podID="47790790-d956-41e0-8868-9fb9fecfefe7" containerID="01038c847cf937170245344a33fbdc7f19bf2b88135e96ab6708ce05e2281dbb" exitCode=0 Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.882938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"01038c847cf937170245344a33fbdc7f19bf2b88135e96ab6708ce05e2281dbb"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.883116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerStarted","Data":"2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885427 4820 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" exitCode=0 Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885519 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nspq6" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885527 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nspq6" event={"ID":"6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0","Type":"ContainerDied","Data":"2c780379966ed60d8da8ce6d2baf9f907b06b0c2d6eeaaf960faf7ec025b1c3b"} Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.885554 4820 scope.go:117] "RemoveContainer" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.899510 4820 scope.go:117] "RemoveContainer" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.917116 4820 scope.go:117] "RemoveContainer" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.923189 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.931347 4820 scope.go:117] "RemoveContainer" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" Feb 21 07:01:47 crc kubenswrapper[4820]: E0221 07:01:47.931721 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a\": container with ID starting with 66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a not found: ID does not exist" containerID="66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.931777 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a"} err="failed to 
get container status \"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a\": rpc error: code = NotFound desc = could not find container \"66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a\": container with ID starting with 66062406db60b67786903317cb117584ccde4a2fe95b31d0fc26ca3eedcd090a not found: ID does not exist" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.931828 4820 scope.go:117] "RemoveContainer" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" Feb 21 07:01:47 crc kubenswrapper[4820]: E0221 07:01:47.932217 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc\": container with ID starting with 537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc not found: ID does not exist" containerID="537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.932260 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc"} err="failed to get container status \"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc\": rpc error: code = NotFound desc = could not find container \"537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc\": container with ID starting with 537edaca4c4ea8d524f52e9a5182bd0761f12a7351782787f06a38fa09f5a5dc not found: ID does not exist" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.932288 4820 scope.go:117] "RemoveContainer" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" Feb 21 07:01:47 crc kubenswrapper[4820]: E0221 07:01:47.932834 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07\": container with ID starting with bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07 not found: ID does not exist" containerID="bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.932879 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07"} err="failed to get container status \"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07\": rpc error: code = NotFound desc = could not find container \"bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07\": container with ID starting with bff84bab2994c7c04e4b0f00b544596dba4f7c5dc1877bffaf60e930d2b0fd07 not found: ID does not exist" Feb 21 07:01:47 crc kubenswrapper[4820]: I0221 07:01:47.936799 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nspq6"] Feb 21 07:01:48 crc kubenswrapper[4820]: I0221 07:01:48.892502 4820 generic.go:334] "Generic (PLEG): container finished" podID="47790790-d956-41e0-8868-9fb9fecfefe7" containerID="f2b97b10fc8f4e954b58ebb3cf8ae33b16d546186f43708f48e7e2637503ba8d" exitCode=0 Feb 21 07:01:48 crc kubenswrapper[4820]: I0221 07:01:48.892595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"f2b97b10fc8f4e954b58ebb3cf8ae33b16d546186f43708f48e7e2637503ba8d"} Feb 21 07:01:49 crc kubenswrapper[4820]: I0221 07:01:49.704982 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" path="/var/lib/kubelet/pods/6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0/volumes" Feb 21 07:01:49 crc kubenswrapper[4820]: I0221 07:01:49.905618 4820 generic.go:334] "Generic 
(PLEG): container finished" podID="47790790-d956-41e0-8868-9fb9fecfefe7" containerID="bc6abeea8560611797e5ebf5c141aebde43471dd6a3e04302b8f6a5d7eab4b7e" exitCode=0 Feb 21 07:01:49 crc kubenswrapper[4820]: I0221 07:01:49.905986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"bc6abeea8560611797e5ebf5c141aebde43471dd6a3e04302b8f6a5d7eab4b7e"} Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.185444 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.269104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") pod \"47790790-d956-41e0-8868-9fb9fecfefe7\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.269194 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") pod \"47790790-d956-41e0-8868-9fb9fecfefe7\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.269305 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") pod \"47790790-d956-41e0-8868-9fb9fecfefe7\" (UID: \"47790790-d956-41e0-8868-9fb9fecfefe7\") " Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.270086 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle" (OuterVolumeSpecName: "bundle") pod "47790790-d956-41e0-8868-9fb9fecfefe7" (UID: "47790790-d956-41e0-8868-9fb9fecfefe7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.278079 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn" (OuterVolumeSpecName: "kube-api-access-jjkhn") pod "47790790-d956-41e0-8868-9fb9fecfefe7" (UID: "47790790-d956-41e0-8868-9fb9fecfefe7"). InnerVolumeSpecName "kube-api-access-jjkhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.290081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util" (OuterVolumeSpecName: "util") pod "47790790-d956-41e0-8868-9fb9fecfefe7" (UID: "47790790-d956-41e0-8868-9fb9fecfefe7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.370878 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjkhn\" (UniqueName: \"kubernetes.io/projected/47790790-d956-41e0-8868-9fb9fecfefe7-kube-api-access-jjkhn\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.370949 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-util\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.370968 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47790790-d956-41e0-8868-9fb9fecfefe7-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.925488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" event={"ID":"47790790-d956-41e0-8868-9fb9fecfefe7","Type":"ContainerDied","Data":"2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd"} Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.925896 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2e62b3dd9057de4fc4fef98d4cbc804e38d9b07f322e01a3120a3d2b8997cd" Feb 21 07:01:51 crc kubenswrapper[4820]: I0221 07:01:51.925617 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.206963 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207188 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="extract" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207198 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="extract" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207211 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="util" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207217 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="util" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207229 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207255 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207263 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-content" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207268 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-content" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207277 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="pull" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="pull" Feb 21 07:01:55 crc kubenswrapper[4820]: E0221 07:01:55.207292 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-utilities" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207298 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="extract-utilities" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207408 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf0f82c-d0f3-4315-88f5-73b4a99cf1d0" containerName="registry-server" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.207427 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="47790790-d956-41e0-8868-9fb9fecfefe7" containerName="extract" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.208693 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.218201 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.325167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.325253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.325289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.426814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427134 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.427722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.444848 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"community-operators-llbg7\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.523461 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:01:55 crc kubenswrapper[4820]: I0221 07:01:55.961996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:01:56 crc kubenswrapper[4820]: I0221 07:01:56.958806 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f" exitCode=0 Feb 21 07:01:56 crc kubenswrapper[4820]: I0221 07:01:56.958973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"} Feb 21 07:01:56 crc kubenswrapper[4820]: I0221 07:01:56.960189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerStarted","Data":"952313b28cd6f53641c8f8ee679f85d42b6fa78ed8b197b1cdd78e7ea0cde5a8"} Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.459493 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"] Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.460335 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.462305 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-bc4xv" Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.497578 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"] Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.554278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlq7\" (UniqueName: \"kubernetes.io/projected/b7cb4a9f-82fd-41b1-8175-351de45fde99-kube-api-access-rxlq7\") pod \"openstack-operator-controller-init-6679bf9b57-l85mk\" (UID: \"b7cb4a9f-82fd-41b1-8175-351de45fde99\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.655674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxlq7\" (UniqueName: \"kubernetes.io/projected/b7cb4a9f-82fd-41b1-8175-351de45fde99-kube-api-access-rxlq7\") pod \"openstack-operator-controller-init-6679bf9b57-l85mk\" (UID: \"b7cb4a9f-82fd-41b1-8175-351de45fde99\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.674754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxlq7\" (UniqueName: \"kubernetes.io/projected/b7cb4a9f-82fd-41b1-8175-351de45fde99-kube-api-access-rxlq7\") pod \"openstack-operator-controller-init-6679bf9b57-l85mk\" (UID: \"b7cb4a9f-82fd-41b1-8175-351de45fde99\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.778265 4820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.971108 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f" exitCode=0 Feb 21 07:01:57 crc kubenswrapper[4820]: I0221 07:01:57.973285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"} Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.186600 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk"] Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.980520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerStarted","Data":"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"} Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.991642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" event={"ID":"b7cb4a9f-82fd-41b1-8175-351de45fde99","Type":"ContainerStarted","Data":"d6e5a2caaaec05c5508bf1e4085d61c0aa16ce4cbb0d788cbef3000cfbb5913a"} Feb 21 07:01:58 crc kubenswrapper[4820]: I0221 07:01:58.999091 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llbg7" podStartSLOduration=2.575517157 podStartE2EDuration="3.999076157s" podCreationTimestamp="2026-02-21 07:01:55 +0000 UTC" firstStartedPulling="2026-02-21 07:01:56.960730406 +0000 UTC m=+891.993814604" lastFinishedPulling="2026-02-21 07:01:58.384289406 
+0000 UTC m=+893.417373604" observedRunningTime="2026-02-21 07:01:58.996016402 +0000 UTC m=+894.029100590" watchObservedRunningTime="2026-02-21 07:01:58.999076157 +0000 UTC m=+894.032160355" Feb 21 07:02:03 crc kubenswrapper[4820]: I0221 07:02:03.018470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" event={"ID":"b7cb4a9f-82fd-41b1-8175-351de45fde99","Type":"ContainerStarted","Data":"90e413e1e124987e3863df0ab0b6743273f75184bfffb82952c4e6841aee29ba"} Feb 21 07:02:03 crc kubenswrapper[4820]: I0221 07:02:03.019061 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:02:03 crc kubenswrapper[4820]: I0221 07:02:03.060281 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" podStartSLOduration=2.126359579 podStartE2EDuration="6.060245802s" podCreationTimestamp="2026-02-21 07:01:57 +0000 UTC" firstStartedPulling="2026-02-21 07:01:58.195177732 +0000 UTC m=+893.228261940" lastFinishedPulling="2026-02-21 07:02:02.129063925 +0000 UTC m=+897.162148163" observedRunningTime="2026-02-21 07:02:03.057826156 +0000 UTC m=+898.090910364" watchObservedRunningTime="2026-02-21 07:02:03.060245802 +0000 UTC m=+898.093330000" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.308303 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"] Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.309666 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.336304 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"] Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.377771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.377834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.377999 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.478879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479295 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.479773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.496903 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"certified-operators-4zcpg\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.524319 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.524375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.562221 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.634350 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:05 crc kubenswrapper[4820]: I0221 07:02:05.985980 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"] Feb 21 07:02:06 crc kubenswrapper[4820]: I0221 07:02:06.067049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerStarted","Data":"69079c7c5acb5ab328f11ec4cd84c2f068ef88fc99cfbcdd5b34b69d1ea69418"} Feb 21 07:02:06 crc kubenswrapper[4820]: I0221 07:02:06.116596 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.077279 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e07490b-0050-483c-8e03-bb915735b22a" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6" exitCode=0 Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.077398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"} Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.784096 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-l85mk" Feb 21 07:02:07 crc kubenswrapper[4820]: I0221 07:02:07.889329 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.086588 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e07490b-0050-483c-8e03-bb915735b22a" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24" exitCode=0 Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.086791 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-llbg7" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server" containerID="cri-o://f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" gracePeriod=2 Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.087606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"} Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.493720 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.645359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") pod \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.645435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") pod \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.645470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") pod \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\" (UID: \"fe66c064-e1f1-4efe-b7a8-4aeae6504817\") " Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.646648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities" (OuterVolumeSpecName: "utilities") pod "fe66c064-e1f1-4efe-b7a8-4aeae6504817" (UID: "fe66c064-e1f1-4efe-b7a8-4aeae6504817"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.651473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l" (OuterVolumeSpecName: "kube-api-access-vgr9l") pod "fe66c064-e1f1-4efe-b7a8-4aeae6504817" (UID: "fe66c064-e1f1-4efe-b7a8-4aeae6504817"). InnerVolumeSpecName "kube-api-access-vgr9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.747353 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.747399 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgr9l\" (UniqueName: \"kubernetes.io/projected/fe66c064-e1f1-4efe-b7a8-4aeae6504817-kube-api-access-vgr9l\") on node \"crc\" DevicePath \"\"" Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.895064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe66c064-e1f1-4efe-b7a8-4aeae6504817" (UID: "fe66c064-e1f1-4efe-b7a8-4aeae6504817"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:02:08 crc kubenswrapper[4820]: I0221 07:02:08.950300 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe66c064-e1f1-4efe-b7a8-4aeae6504817-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095484 4820 generic.go:334] "Generic (PLEG): container finished" podID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" exitCode=0 Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095543 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbg7" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"} Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbg7" event={"ID":"fe66c064-e1f1-4efe-b7a8-4aeae6504817","Type":"ContainerDied","Data":"952313b28cd6f53641c8f8ee679f85d42b6fa78ed8b197b1cdd78e7ea0cde5a8"} Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.095974 4820 scope.go:117] "RemoveContainer" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.099287 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerStarted","Data":"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"} Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.109664 4820 scope.go:117] "RemoveContainer" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.120961 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4zcpg" podStartSLOduration=2.69308084 podStartE2EDuration="4.120946037s" podCreationTimestamp="2026-02-21 07:02:05 +0000 UTC" firstStartedPulling="2026-02-21 07:02:07.07943379 +0000 UTC m=+902.112517998" lastFinishedPulling="2026-02-21 07:02:08.507298997 +0000 UTC m=+903.540383195" observedRunningTime="2026-02-21 07:02:09.118808838 +0000 UTC m=+904.151893046" watchObservedRunningTime="2026-02-21 
07:02:09.120946037 +0000 UTC m=+904.154030235" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.135658 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.139475 4820 scope.go:117] "RemoveContainer" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.147420 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-llbg7"] Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.153558 4820 scope.go:117] "RemoveContainer" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" Feb 21 07:02:09 crc kubenswrapper[4820]: E0221 07:02:09.153965 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6\": container with ID starting with f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6 not found: ID does not exist" containerID="f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154024 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6"} err="failed to get container status \"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6\": rpc error: code = NotFound desc = could not find container \"f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6\": container with ID starting with f95db48116e55d5b5d5b0e170f555bf5dcca8a69e5f9a9cc5aabd77765daa0b6 not found: ID does not exist" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154061 4820 scope.go:117] "RemoveContainer" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f" Feb 21 
07:02:09 crc kubenswrapper[4820]: E0221 07:02:09.154751 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f\": container with ID starting with 07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f not found: ID does not exist" containerID="07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154782 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f"} err="failed to get container status \"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f\": rpc error: code = NotFound desc = could not find container \"07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f\": container with ID starting with 07bc624cf18e5cef646f24a641a161f3d78d33be8c25a3967d7f67c872c52e5f not found: ID does not exist" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.154805 4820 scope.go:117] "RemoveContainer" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f" Feb 21 07:02:09 crc kubenswrapper[4820]: E0221 07:02:09.155083 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f\": container with ID starting with 440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f not found: ID does not exist" containerID="440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.155114 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f"} err="failed to get container status 
\"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f\": rpc error: code = NotFound desc = could not find container \"440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f\": container with ID starting with 440fa02fe52c22200884107001e3c8e56a32b30698ae2cbd4729ee0de491f80f not found: ID does not exist" Feb 21 07:02:09 crc kubenswrapper[4820]: I0221 07:02:09.705356 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" path="/var/lib/kubelet/pods/fe66c064-e1f1-4efe-b7a8-4aeae6504817/volumes" Feb 21 07:02:15 crc kubenswrapper[4820]: I0221 07:02:15.635211 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:15 crc kubenswrapper[4820]: I0221 07:02:15.636582 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:15 crc kubenswrapper[4820]: I0221 07:02:15.673822 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:16 crc kubenswrapper[4820]: I0221 07:02:16.185980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:16 crc kubenswrapper[4820]: I0221 07:02:16.221416 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"] Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.158136 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4zcpg" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server" containerID="cri-o://f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" gracePeriod=2 Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.541964 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.677916 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") pod \"1e07490b-0050-483c-8e03-bb915735b22a\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.678013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") pod \"1e07490b-0050-483c-8e03-bb915735b22a\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.678072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") pod \"1e07490b-0050-483c-8e03-bb915735b22a\" (UID: \"1e07490b-0050-483c-8e03-bb915735b22a\") " Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.678716 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities" (OuterVolumeSpecName: "utilities") pod "1e07490b-0050-483c-8e03-bb915735b22a" (UID: "1e07490b-0050-483c-8e03-bb915735b22a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.686977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj" (OuterVolumeSpecName: "kube-api-access-7tgdj") pod "1e07490b-0050-483c-8e03-bb915735b22a" (UID: "1e07490b-0050-483c-8e03-bb915735b22a"). InnerVolumeSpecName "kube-api-access-7tgdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.731623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e07490b-0050-483c-8e03-bb915735b22a" (UID: "1e07490b-0050-483c-8e03-bb915735b22a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.780031 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.780157 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e07490b-0050-483c-8e03-bb915735b22a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:02:18 crc kubenswrapper[4820]: I0221 07:02:18.780401 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tgdj\" (UniqueName: \"kubernetes.io/projected/1e07490b-0050-483c-8e03-bb915735b22a-kube-api-access-7tgdj\") on node \"crc\" DevicePath \"\"" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167535 4820 generic.go:334] "Generic (PLEG): container finished" podID="1e07490b-0050-483c-8e03-bb915735b22a" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" exitCode=0 Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"} Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167600 4820 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zcpg" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zcpg" event={"ID":"1e07490b-0050-483c-8e03-bb915735b22a","Type":"ContainerDied","Data":"69079c7c5acb5ab328f11ec4cd84c2f068ef88fc99cfbcdd5b34b69d1ea69418"} Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.167643 4820 scope.go:117] "RemoveContainer" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.194312 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"] Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.197991 4820 scope.go:117] "RemoveContainer" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.198931 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4zcpg"] Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.214647 4820 scope.go:117] "RemoveContainer" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.231738 4820 scope.go:117] "RemoveContainer" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" Feb 21 07:02:19 crc kubenswrapper[4820]: E0221 07:02:19.232137 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a\": container with ID starting with f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a not found: ID does not exist" containerID="f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232200 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a"} err="failed to get container status \"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a\": rpc error: code = NotFound desc = could not find container \"f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a\": container with ID starting with f810eca6d4d6cef11ef0fe869c2714326bcbeb4ed1a4fd92b8f8b625f928203a not found: ID does not exist" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232262 4820 scope.go:117] "RemoveContainer" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24" Feb 21 07:02:19 crc kubenswrapper[4820]: E0221 07:02:19.232565 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24\": container with ID starting with 87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24 not found: ID does not exist" containerID="87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232596 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24"} err="failed to get container status \"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24\": rpc error: code = NotFound desc = could not find container \"87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24\": container with ID starting with 87839908a8ca91f52698d32b255728977156153b8317d6c36182307d7f2d2a24 not found: ID does not exist" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232618 4820 scope.go:117] "RemoveContainer" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6" Feb 21 07:02:19 crc kubenswrapper[4820]: E0221 
07:02:19.232879 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6\": container with ID starting with ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6 not found: ID does not exist" containerID="ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.232928 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6"} err="failed to get container status \"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6\": rpc error: code = NotFound desc = could not find container \"ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6\": container with ID starting with ab9a3e8d5736aefca6155499c204c27292d05cc56b081aefa08427aad1346cc6 not found: ID does not exist" Feb 21 07:02:19 crc kubenswrapper[4820]: I0221 07:02:19.703270 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e07490b-0050-483c-8e03-bb915735b22a" path="/var/lib/kubelet/pods/1e07490b-0050-483c-8e03-bb915735b22a/volumes" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.628975 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"] Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629762 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629776 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629791 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-content" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629801 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-content" Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629814 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-utilities" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629823 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-utilities" Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629841 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-content" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629849 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="extract-content" Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629866 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629874 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: E0221 07:02:47.629886 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-utilities" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.629894 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="extract-utilities" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.630045 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1e07490b-0050-483c-8e03-bb915735b22a" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.630060 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe66c064-e1f1-4efe-b7a8-4aeae6504817" containerName="registry-server" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.630514 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.635832 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7wlqx" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.637061 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.638172 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.645209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.650582 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.655208 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.656304 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.657984 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-n425l" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.658396 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jc8vd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.692569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.719011 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.719776 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.723099 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tk9hs" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.744725 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.749834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnq6\" (UniqueName: \"kubernetes.io/projected/f8b2e5d3-e795-4971-92d9-f0d8f6586fa8-kube-api-access-8bnq6\") pod \"designate-operator-controller-manager-6d8bf5c495-7fq9h\" (UID: \"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.749962 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwl42\" (UniqueName: \"kubernetes.io/projected/3c9c6322-ba57-47b3-a079-ab86a6660c45-kube-api-access-gwl42\") pod \"cinder-operator-controller-manager-5d946d989d-lx4sd\" (UID: \"3c9c6322-ba57-47b3-a079-ab86a6660c45\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.749985 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszvx\" (UniqueName: \"kubernetes.io/projected/76209e29-400d-4677-85b5-89c5f4e9323a-kube-api-access-kszvx\") pod \"barbican-operator-controller-manager-868647ff47-lrgjm\" (UID: \"76209e29-400d-4677-85b5-89c5f4e9323a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 
07:02:47.755112 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.770652 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-t5npw" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.796361 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.802040 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.808019 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.813604 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.814331 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.814818 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.818153 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4xx9d" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.818376 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.818531 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-smzkg" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.833957 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.834987 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.840391 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.841253 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.843529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h92lv" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.851665 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853700 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7c9\" (UniqueName: \"kubernetes.io/projected/f8cd79d8-6ba2-467c-95b5-4d965d73ed75-kube-api-access-qr7c9\") pod \"glance-operator-controller-manager-77987464f4-gbtvh\" (UID: \"f8cd79d8-6ba2-467c-95b5-4d965d73ed75\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsgn\" (UniqueName: \"kubernetes.io/projected/a4f64d1a-4768-48e1-8a88-fbf906956528-kube-api-access-9dsgn\") pod \"heat-operator-controller-manager-69f49c598c-tlx7z\" (UID: \"a4f64d1a-4768-48e1-8a88-fbf906956528\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnq6\" (UniqueName: \"kubernetes.io/projected/f8b2e5d3-e795-4971-92d9-f0d8f6586fa8-kube-api-access-8bnq6\") pod \"designate-operator-controller-manager-6d8bf5c495-7fq9h\" (UID: \"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853844 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwl42\" (UniqueName: \"kubernetes.io/projected/3c9c6322-ba57-47b3-a079-ab86a6660c45-kube-api-access-gwl42\") pod \"cinder-operator-controller-manager-5d946d989d-lx4sd\" (UID: \"3c9c6322-ba57-47b3-a079-ab86a6660c45\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.853861 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszvx\" (UniqueName: \"kubernetes.io/projected/76209e29-400d-4677-85b5-89c5f4e9323a-kube-api-access-kszvx\") pod \"barbican-operator-controller-manager-868647ff47-lrgjm\" (UID: \"76209e29-400d-4677-85b5-89c5f4e9323a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.862320 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.863083 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.869796 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.870886 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.871510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bdntt" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.875642 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z7nvp" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.890973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnq6\" (UniqueName: \"kubernetes.io/projected/f8b2e5d3-e795-4971-92d9-f0d8f6586fa8-kube-api-access-8bnq6\") pod \"designate-operator-controller-manager-6d8bf5c495-7fq9h\" (UID: \"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.899666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszvx\" (UniqueName: \"kubernetes.io/projected/76209e29-400d-4677-85b5-89c5f4e9323a-kube-api-access-kszvx\") pod \"barbican-operator-controller-manager-868647ff47-lrgjm\" (UID: \"76209e29-400d-4677-85b5-89c5f4e9323a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.911338 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.912625 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwl42\" (UniqueName: \"kubernetes.io/projected/3c9c6322-ba57-47b3-a079-ab86a6660c45-kube-api-access-gwl42\") pod \"cinder-operator-controller-manager-5d946d989d-lx4sd\" (UID: \"3c9c6322-ba57-47b3-a079-ab86a6660c45\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.922972 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.923785 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.927111 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-js42f" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.935841 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.947855 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.954960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbbvk\" (UniqueName: \"kubernetes.io/projected/4f343be8-a654-43ac-938a-6b726caab1ad-kube-api-access-dbbvk\") pod \"ironic-operator-controller-manager-554564d7fc-fj4tn\" (UID: \"4f343be8-a654-43ac-938a-6b726caab1ad\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctj4\" (UniqueName: \"kubernetes.io/projected/903ed1dc-819c-4ed9-86f6-ca32e4f96792-kube-api-access-vctj4\") pod \"keystone-operator-controller-manager-b4d948c87-lgdx6\" (UID: \"903ed1dc-819c-4ed9-86f6-ca32e4f96792\") " 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955062 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7c9\" (UniqueName: \"kubernetes.io/projected/f8cd79d8-6ba2-467c-95b5-4d965d73ed75-kube-api-access-qr7c9\") pod \"glance-operator-controller-manager-77987464f4-gbtvh\" (UID: \"f8cd79d8-6ba2-467c-95b5-4d965d73ed75\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955119 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hds88\" (UniqueName: \"kubernetes.io/projected/047df55d-9730-4215-bbd5-73fd59a0e9f5-kube-api-access-hds88\") pod \"manila-operator-controller-manager-54f6768c69-pbn9f\" (UID: \"047df55d-9730-4215-bbd5-73fd59a0e9f5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dsgn\" (UniqueName: \"kubernetes.io/projected/a4f64d1a-4768-48e1-8a88-fbf906956528-kube-api-access-9dsgn\") pod \"heat-operator-controller-manager-69f49c598c-tlx7z\" (UID: \"a4f64d1a-4768-48e1-8a88-fbf906956528\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955176 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/7ab15a3b-5688-4d42-b99a-e88bb8b11f65-kube-api-access-pcrgj\") pod \"horizon-operator-controller-manager-5b9b8895d5-t6t6b\" (UID: \"7ab15a3b-5688-4d42-b99a-e88bb8b11f65\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dqv\" (UniqueName: \"kubernetes.io/projected/2ae82741-a73e-4d45-852f-a206550cb1e9-kube-api-access-d8dqv\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.955929 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.964291 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"] Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.965072 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.968911 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.977467 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nz7jt" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.986814 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:02:47 crc kubenswrapper[4820]: I0221 07:02:47.998060 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7c9\" (UniqueName: \"kubernetes.io/projected/f8cd79d8-6ba2-467c-95b5-4d965d73ed75-kube-api-access-qr7c9\") pod \"glance-operator-controller-manager-77987464f4-gbtvh\" (UID: \"f8cd79d8-6ba2-467c-95b5-4d965d73ed75\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.012895 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.014001 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.017815 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lg84l" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.033677 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.035317 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.036257 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.037822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsgn\" (UniqueName: \"kubernetes.io/projected/a4f64d1a-4768-48e1-8a88-fbf906956528-kube-api-access-9dsgn\") pod \"heat-operator-controller-manager-69f49c598c-tlx7z\" (UID: \"a4f64d1a-4768-48e1-8a88-fbf906956528\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.044665 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8k5rm" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/7ab15a3b-5688-4d42-b99a-e88bb8b11f65-kube-api-access-pcrgj\") pod \"horizon-operator-controller-manager-5b9b8895d5-t6t6b\" (UID: \"7ab15a3b-5688-4d42-b99a-e88bb8b11f65\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dqv\" (UniqueName: \"kubernetes.io/projected/2ae82741-a73e-4d45-852f-a206550cb1e9-kube-api-access-d8dqv\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdpbs\" (UniqueName: \"kubernetes.io/projected/b248c78b-0213-4833-8d04-7d2514c2e673-kube-api-access-rdpbs\") pod \"mariadb-operator-controller-manager-6994f66f48-gxpq6\" (UID: \"b248c78b-0213-4833-8d04-7d2514c2e673\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbbvk\" (UniqueName: \"kubernetes.io/projected/4f343be8-a654-43ac-938a-6b726caab1ad-kube-api-access-dbbvk\") pod \"ironic-operator-controller-manager-554564d7fc-fj4tn\" (UID: \"4f343be8-a654-43ac-938a-6b726caab1ad\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057565 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vctj4\" (UniqueName: \"kubernetes.io/projected/903ed1dc-819c-4ed9-86f6-ca32e4f96792-kube-api-access-vctj4\") pod \"keystone-operator-controller-manager-b4d948c87-lgdx6\" (UID: \"903ed1dc-819c-4ed9-86f6-ca32e4f96792\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057586 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4l4\" (UniqueName: \"kubernetes.io/projected/9ec17569-aac1-4b58-8efc-b5a483e47a71-kube-api-access-6h4l4\") pod \"neutron-operator-controller-manager-64ddbf8bb-lzhqv\" (UID: \"9ec17569-aac1-4b58-8efc-b5a483e47a71\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.057635 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hds88\" (UniqueName: \"kubernetes.io/projected/047df55d-9730-4215-bbd5-73fd59a0e9f5-kube-api-access-hds88\") pod \"manila-operator-controller-manager-54f6768c69-pbn9f\" (UID: \"047df55d-9730-4215-bbd5-73fd59a0e9f5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.058093 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.058136 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:48.558122314 +0000 UTC m=+943.591206512 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.058314 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.070778 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.101901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbbvk\" (UniqueName: \"kubernetes.io/projected/4f343be8-a654-43ac-938a-6b726caab1ad-kube-api-access-dbbvk\") pod \"ironic-operator-controller-manager-554564d7fc-fj4tn\" (UID: \"4f343be8-a654-43ac-938a-6b726caab1ad\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.105375 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.105886 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.106193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/7ab15a3b-5688-4d42-b99a-e88bb8b11f65-kube-api-access-pcrgj\") pod \"horizon-operator-controller-manager-5b9b8895d5-t6t6b\" (UID: \"7ab15a3b-5688-4d42-b99a-e88bb8b11f65\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.113047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctj4\" (UniqueName: \"kubernetes.io/projected/903ed1dc-819c-4ed9-86f6-ca32e4f96792-kube-api-access-vctj4\") pod \"keystone-operator-controller-manager-b4d948c87-lgdx6\" (UID: \"903ed1dc-819c-4ed9-86f6-ca32e4f96792\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.114867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hds88\" (UniqueName: \"kubernetes.io/projected/047df55d-9730-4215-bbd5-73fd59a0e9f5-kube-api-access-hds88\") pod \"manila-operator-controller-manager-54f6768c69-pbn9f\" (UID: \"047df55d-9730-4215-bbd5-73fd59a0e9f5\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.119349 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.120093 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.129975 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-szg66" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.136823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dqv\" (UniqueName: \"kubernetes.io/projected/2ae82741-a73e-4d45-852f-a206550cb1e9-kube-api-access-d8dqv\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.158531 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgk9\" (UniqueName: \"kubernetes.io/projected/2b4b6741-5442-4ef0-a8e1-49e389157cd4-kube-api-access-swgk9\") pod \"nova-operator-controller-manager-567668f5cf-c96wv\" (UID: \"2b4b6741-5442-4ef0-a8e1-49e389157cd4\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4l4\" (UniqueName: \"kubernetes.io/projected/9ec17569-aac1-4b58-8efc-b5a483e47a71-kube-api-access-6h4l4\") pod \"neutron-operator-controller-manager-64ddbf8bb-lzhqv\" (UID: \"9ec17569-aac1-4b58-8efc-b5a483e47a71\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159197 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9dd\" (UniqueName: \"kubernetes.io/projected/d922fcc6-f8a7-451a-b998-fc04189a6d85-kube-api-access-nz9dd\") pod \"octavia-operator-controller-manager-69f8888797-54dzd\" (UID: \"d922fcc6-f8a7-451a-b998-fc04189a6d85\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdpbs\" (UniqueName: \"kubernetes.io/projected/b248c78b-0213-4833-8d04-7d2514c2e673-kube-api-access-rdpbs\") pod \"mariadb-operator-controller-manager-6994f66f48-gxpq6\" (UID: \"b248c78b-0213-4833-8d04-7d2514c2e673\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159736 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.159888 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.175533 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.175717 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cbvg5" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.207286 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4l4\" (UniqueName: \"kubernetes.io/projected/9ec17569-aac1-4b58-8efc-b5a483e47a71-kube-api-access-6h4l4\") pod \"neutron-operator-controller-manager-64ddbf8bb-lzhqv\" (UID: \"9ec17569-aac1-4b58-8efc-b5a483e47a71\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.208130 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.212909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdpbs\" (UniqueName: \"kubernetes.io/projected/b248c78b-0213-4833-8d04-7d2514c2e673-kube-api-access-rdpbs\") pod \"mariadb-operator-controller-manager-6994f66f48-gxpq6\" (UID: \"b248c78b-0213-4833-8d04-7d2514c2e673\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.252162 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdbq\" (UniqueName: \"kubernetes.io/projected/9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1-kube-api-access-mqdbq\") pod \"ovn-operator-controller-manager-d44cf6b75-2dfxn\" (UID: \"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261532 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bhv\" (UniqueName: \"kubernetes.io/projected/c4453479-1bc9-4393-8853-396ec6ae4f7f-kube-api-access-66bhv\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261600 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgk9\" (UniqueName: \"kubernetes.io/projected/2b4b6741-5442-4ef0-a8e1-49e389157cd4-kube-api-access-swgk9\") pod \"nova-operator-controller-manager-567668f5cf-c96wv\" (UID: \"2b4b6741-5442-4ef0-a8e1-49e389157cd4\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 
21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.261942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9dd\" (UniqueName: \"kubernetes.io/projected/d922fcc6-f8a7-451a-b998-fc04189a6d85-kube-api-access-nz9dd\") pod \"octavia-operator-controller-manager-69f8888797-54dzd\" (UID: \"d922fcc6-f8a7-451a-b998-fc04189a6d85\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.272449 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.273313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.279642 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-phlw6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.282728 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.299805 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgk9\" (UniqueName: \"kubernetes.io/projected/2b4b6741-5442-4ef0-a8e1-49e389157cd4-kube-api-access-swgk9\") pod \"nova-operator-controller-manager-567668f5cf-c96wv\" (UID: \"2b4b6741-5442-4ef0-a8e1-49e389157cd4\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.304364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9dd\" (UniqueName: \"kubernetes.io/projected/d922fcc6-f8a7-451a-b998-fc04189a6d85-kube-api-access-nz9dd\") pod \"octavia-operator-controller-manager-69f8888797-54dzd\" (UID: \"d922fcc6-f8a7-451a-b998-fc04189a6d85\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.325827 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.358994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.392991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdbq\" (UniqueName: \"kubernetes.io/projected/9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1-kube-api-access-mqdbq\") pod \"ovn-operator-controller-manager-d44cf6b75-2dfxn\" (UID: \"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.393500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.394778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfzqx\" (UniqueName: \"kubernetes.io/projected/18cf798f-3eea-4e15-8bb1-bda4895ffed4-kube-api-access-tfzqx\") pod \"placement-operator-controller-manager-8497b45c89-n6dpn\" (UID: \"18cf798f-3eea-4e15-8bb1-bda4895ffed4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.394903 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66bhv\" (UniqueName: \"kubernetes.io/projected/c4453479-1bc9-4393-8853-396ec6ae4f7f-kube-api-access-66bhv\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.393903 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.396093 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:02:48.895883335 +0000 UTC m=+943.928967533 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.409521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.418373 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.431650 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.437740 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2rktp" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.437980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66bhv\" (UniqueName: \"kubernetes.io/projected/c4453479-1bc9-4393-8853-396ec6ae4f7f-kube-api-access-66bhv\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.438887 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.439407 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.444256 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.455301 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdbq\" (UniqueName: \"kubernetes.io/projected/9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1-kube-api-access-mqdbq\") pod \"ovn-operator-controller-manager-d44cf6b75-2dfxn\" (UID: \"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.459579 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.460471 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.464652 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jmpnv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.477868 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.485359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.489971 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.497813 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfzqx\" (UniqueName: \"kubernetes.io/projected/18cf798f-3eea-4e15-8bb1-bda4895ffed4-kube-api-access-tfzqx\") pod \"placement-operator-controller-manager-8497b45c89-n6dpn\" (UID: \"18cf798f-3eea-4e15-8bb1-bda4895ffed4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.525554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfzqx\" (UniqueName: \"kubernetes.io/projected/18cf798f-3eea-4e15-8bb1-bda4895ffed4-kube-api-access-tfzqx\") pod \"placement-operator-controller-manager-8497b45c89-n6dpn\" (UID: \"18cf798f-3eea-4e15-8bb1-bda4895ffed4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.539344 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-whrpt"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.540339 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.551658 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xk2hv" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.562254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-whrpt"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.568027 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.569117 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.569948 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.570690 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dgrsg" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.590717 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.598810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.598872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfddc\" (UniqueName: \"kubernetes.io/projected/412bd84a-46bb-49b9-8d0a-17d6cc683ea0-kube-api-access-xfddc\") pod \"swift-operator-controller-manager-68f46476f-cv9cl\" (UID: \"412bd84a-46bb-49b9-8d0a-17d6cc683ea0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.598919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmzd\" (UniqueName: \"kubernetes.io/projected/246cc20b-aa24-4c15-8eb7-659e10b21e92-kube-api-access-xlmzd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jdxhc\" (UID: \"246cc20b-aa24-4c15-8eb7-659e10b21e92\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.598998 4820 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.599082 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:49.599066185 +0000 UTC m=+944.632150383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.601612 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.606216 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.608111 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.614376 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cvbk4" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.614565 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.615294 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.615432 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.642475 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.643356 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.647527 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.661410 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vjgjh" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfddc\" (UniqueName: \"kubernetes.io/projected/412bd84a-46bb-49b9-8d0a-17d6cc683ea0-kube-api-access-xfddc\") pod \"swift-operator-controller-manager-68f46476f-cv9cl\" (UID: \"412bd84a-46bb-49b9-8d0a-17d6cc683ea0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700221 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mrx\" (UniqueName: \"kubernetes.io/projected/ee323e4c-82c4-4b71-b69b-5aef22e36516-kube-api-access-l6mrx\") pod \"watcher-operator-controller-manager-5db88f68c-jt2g2\" (UID: \"ee323e4c-82c4-4b71-b69b-5aef22e36516\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700393 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zbj\" (UniqueName: \"kubernetes.io/projected/b425a24f-112c-4e36-a173-21a59ce15ef0-kube-api-access-s8zbj\") pod \"test-operator-controller-manager-7866795846-whrpt\" (UID: \"b425a24f-112c-4e36-a173-21a59ce15ef0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmzd\" (UniqueName: \"kubernetes.io/projected/246cc20b-aa24-4c15-8eb7-659e10b21e92-kube-api-access-xlmzd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jdxhc\" (UID: \"246cc20b-aa24-4c15-8eb7-659e10b21e92\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.700514 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j28h\" (UniqueName: \"kubernetes.io/projected/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-kube-api-access-4j28h\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.718979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmzd\" (UniqueName: 
\"kubernetes.io/projected/246cc20b-aa24-4c15-8eb7-659e10b21e92-kube-api-access-xlmzd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-jdxhc\" (UID: \"246cc20b-aa24-4c15-8eb7-659e10b21e92\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.750121 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"] Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.753464 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfddc\" (UniqueName: \"kubernetes.io/projected/412bd84a-46bb-49b9-8d0a-17d6cc683ea0-kube-api-access-xfddc\") pod \"swift-operator-controller-manager-68f46476f-cv9cl\" (UID: \"412bd84a-46bb-49b9-8d0a-17d6cc683ea0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.766695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.785165 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802342 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8zbj\" (UniqueName: \"kubernetes.io/projected/b425a24f-112c-4e36-a173-21a59ce15ef0-kube-api-access-s8zbj\") pod \"test-operator-controller-manager-7866795846-whrpt\" (UID: \"b425a24f-112c-4e36-a173-21a59ce15ef0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802455 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs87x\" (UniqueName: \"kubernetes.io/projected/fde95ed3-63bc-4401-b8b8-539da71db026-kube-api-access-vs87x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wv5gr\" (UID: \"fde95ed3-63bc-4401-b8b8-539da71db026\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j28h\" (UniqueName: \"kubernetes.io/projected/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-kube-api-access-4j28h\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 
07:02:48.802592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.802662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mrx\" (UniqueName: \"kubernetes.io/projected/ee323e4c-82c4-4b71-b69b-5aef22e36516-kube-api-access-l6mrx\") pod \"watcher-operator-controller-manager-5db88f68c-jt2g2\" (UID: \"ee323e4c-82c4-4b71-b69b-5aef22e36516\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.804227 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.804293 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:49.304276909 +0000 UTC m=+944.337361107 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.804987 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.805024 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:49.30501382 +0000 UTC m=+944.338098018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.830076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8zbj\" (UniqueName: \"kubernetes.io/projected/b425a24f-112c-4e36-a173-21a59ce15ef0-kube-api-access-s8zbj\") pod \"test-operator-controller-manager-7866795846-whrpt\" (UID: \"b425a24f-112c-4e36-a173-21a59ce15ef0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.830715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j28h\" (UniqueName: \"kubernetes.io/projected/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-kube-api-access-4j28h\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.841326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mrx\" (UniqueName: \"kubernetes.io/projected/ee323e4c-82c4-4b71-b69b-5aef22e36516-kube-api-access-l6mrx\") pod \"watcher-operator-controller-manager-5db88f68c-jt2g2\" (UID: \"ee323e4c-82c4-4b71-b69b-5aef22e36516\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.904305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs87x\" (UniqueName: \"kubernetes.io/projected/fde95ed3-63bc-4401-b8b8-539da71db026-kube-api-access-vs87x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wv5gr\" (UID: \"fde95ed3-63bc-4401-b8b8-539da71db026\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.904526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.904700 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: E0221 07:02:48.904750 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. 
No retries permitted until 2026-02-21 07:02:49.904729998 +0000 UTC m=+944.937814196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.915974 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.925226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs87x\" (UniqueName: \"kubernetes.io/projected/fde95ed3-63bc-4401-b8b8-539da71db026-kube-api-access-vs87x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wv5gr\" (UID: \"fde95ed3-63bc-4401-b8b8-539da71db026\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:48 crc kubenswrapper[4820]: I0221 07:02:48.929860 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.054503 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.227517 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.316000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.316096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316214 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316279 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:50.316261508 +0000 UTC m=+945.349345706 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316289 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.316358 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:50.3163408 +0000 UTC m=+945.349424998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.333331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.358451 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4f64d1a_4768_48e1_8a88_fbf906956528.slice/crio-2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705 WatchSource:0}: Error finding container 2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705: Status 404 returned error can't find the container with id 2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705 Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.360793 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.365200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" event={"ID":"76209e29-400d-4677-85b5-89c5f4e9323a","Type":"ContainerStarted","Data":"4d66cbc3a7a6b364dc1dc6a92cf7ae7e2f06554f15e521a5d7d46d3b0f20e0de"} Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.367955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" event={"ID":"f8cd79d8-6ba2-467c-95b5-4d965d73ed75","Type":"ContainerStarted","Data":"f7b933ff31f42e50bfce992517a26b26b8af281d2e8643ce1d50bb087202ddf8"} Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.369446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" event={"ID":"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8","Type":"ContainerStarted","Data":"f19deb5a5b775ac444d24c0d4822d017593bd49e6a481e4bbabbf9fd6cb7a3e0"} Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.369888 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.531587 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.569322 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod903ed1dc_819c_4ed9_86f6_ca32e4f96792.slice/crio-8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0 WatchSource:0}: Error finding container 8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0: Status 404 returned error can't find the container with id 
8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0 Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.570287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.594149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.624369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.624484 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.624519 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:51.624506702 +0000 UTC m=+946.657590900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.763884 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.800522 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4b6741_5442_4ef0_a8e1_49e389157cd4.slice/crio-2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8 WatchSource:0}: Error finding container 2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8: Status 404 returned error can't find the container with id 2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8 Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.801400 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec17569_aac1_4b58_8efc_b5a483e47a71.slice/crio-6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936 WatchSource:0}: Error finding container 6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936: Status 404 returned error can't find the container with id 6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936 Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.802691 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb425a24f_112c_4e36_a173_21a59ce15ef0.slice/crio-71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e WatchSource:0}: Error finding container 71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e: Status 404 returned 
error can't find the container with id 71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.810541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"] Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.812125 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd922fcc6_f8a7_451a_b998_fc04189a6d85.slice/crio-914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700 WatchSource:0}: Error finding container 914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700: Status 404 returned error can't find the container with id 914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700 Feb 21 07:02:49 crc kubenswrapper[4820]: W0221 07:02:49.820136 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047df55d_9730_4215_bbd5_73fd59a0e9f5.slice/crio-238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7 WatchSource:0}: Error finding container 238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7: Status 404 returned error can't find the container with id 238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7 Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.823356 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hds88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-pbn9f_openstack-operators(047df55d-9730-4215-bbd5-73fd59a0e9f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.824543 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podUID="047df55d-9730-4215-bbd5-73fd59a0e9f5" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.824895 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.838849 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-whrpt"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.844840 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"] Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.847545 4820 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vs87x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-wv5gr_openstack-operators(fde95ed3-63bc-4401-b8b8-539da71db026): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.849294 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podUID="fde95ed3-63bc-4401-b8b8-539da71db026" Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.852384 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.860777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.931829 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.931980 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: E0221 07:02:49.932028 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:02:51.932014956 +0000 UTC m=+946.965099154 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.950370 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.960641 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.969535 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"] Feb 21 07:02:49 crc kubenswrapper[4820]: I0221 07:02:49.997069 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"] Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.010142 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqdbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-2dfxn_openstack-operators(9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.011916 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podUID="9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1" Feb 21 07:02:50 
crc kubenswrapper[4820]: I0221 07:02:50.014602 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"] Feb 21 07:02:50 crc kubenswrapper[4820]: W0221 07:02:50.028550 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod246cc20b_aa24_4c15_8eb7_659e10b21e92.slice/crio-55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535 WatchSource:0}: Error finding container 55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535: Status 404 returned error can't find the container with id 55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535 Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.037831 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6mrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-jt2g2_openstack-operators(ee323e4c-82c4-4b71-b69b-5aef22e36516): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.038283 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlmzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-jdxhc_openstack-operators(246cc20b-aa24-4c15-8eb7-659e10b21e92): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.038343 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfzqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-n6dpn_openstack-operators(18cf798f-3eea-4e15-8bb1-bda4895ffed4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.040357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podUID="246cc20b-aa24-4c15-8eb7-659e10b21e92" Feb 21 07:02:50 crc 
kubenswrapper[4820]: E0221 07:02:50.040399 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podUID="ee323e4c-82c4-4b71-b69b-5aef22e36516" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.046728 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podUID="18cf798f-3eea-4e15-8bb1-bda4895ffed4" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.337743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.337882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338044 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338094 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. 
No retries permitted until 2026-02-21 07:02:52.338079496 +0000 UTC m=+947.371163694 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338350 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.338430 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:52.338421655 +0000 UTC m=+947.371505853 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.401724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" event={"ID":"a4f64d1a-4768-48e1-8a88-fbf906956528","Type":"ContainerStarted","Data":"2d7704a09a082fca33ed3d28d0fb40c3739bb04e903dcd859da5c1e13327b705"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.403400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" event={"ID":"412bd84a-46bb-49b9-8d0a-17d6cc683ea0","Type":"ContainerStarted","Data":"dd299d62dc5c35118e7e6265e9cd896a73d81feac775240edd74923068b3298f"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 
07:02:50.405696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" event={"ID":"2b4b6741-5442-4ef0-a8e1-49e389157cd4","Type":"ContainerStarted","Data":"2b9d7512cdf3318cd08907c77852817a19ae8aa07f75271a73e862325dcd09c8"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.431051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" event={"ID":"903ed1dc-819c-4ed9-86f6-ca32e4f96792","Type":"ContainerStarted","Data":"8cdbbaa051b8506fa2e3d8861abbf091d57f7b8c13107727c7ead7ac9763dab0"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.432864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" event={"ID":"4f343be8-a654-43ac-938a-6b726caab1ad","Type":"ContainerStarted","Data":"d9a5908311079faa3baaaeb94ffdddf091c57f657f0fd7a55b40753d269b5223"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.434444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" event={"ID":"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1","Type":"ContainerStarted","Data":"7eff22a697ed68495406267845fb52e365ce4dd2f95235d58e179c2936821aec"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.454143 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podUID="9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.458622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" event={"ID":"7ab15a3b-5688-4d42-b99a-e88bb8b11f65","Type":"ContainerStarted","Data":"61f5b235390f23cc566913940568830592187ea0068dd0f8cf4fc5c0a317b3c2"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.464531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" event={"ID":"ee323e4c-82c4-4b71-b69b-5aef22e36516","Type":"ContainerStarted","Data":"3e7fd551c96c2715af94302735fcbe6de47cad5e9f3b780eb3bef8d7facbb894"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.466539 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podUID="ee323e4c-82c4-4b71-b69b-5aef22e36516" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.467059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" event={"ID":"3c9c6322-ba57-47b3-a079-ab86a6660c45","Type":"ContainerStarted","Data":"3ea44f73cec2b4e1192dc3d93a76184096c293bb16b09a199bf3b56d99a755af"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.468052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" event={"ID":"047df55d-9730-4215-bbd5-73fd59a0e9f5","Type":"ContainerStarted","Data":"238c7fa321833b1fe70625d1048d9834384f190e14b773185df4a3e7beb269d7"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.473487 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podUID="047df55d-9730-4215-bbd5-73fd59a0e9f5" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.475126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" event={"ID":"d922fcc6-f8a7-451a-b998-fc04189a6d85","Type":"ContainerStarted","Data":"914d345c3e27fb7208b33060f65e210b75374c68f5bc41809d4531cac0652700"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.477141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" event={"ID":"b425a24f-112c-4e36-a173-21a59ce15ef0","Type":"ContainerStarted","Data":"71427176ffb9b4df65ddbf2ac59d17d9ef25b79e33f52af8f86b412cfcd7956e"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.490547 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" event={"ID":"246cc20b-aa24-4c15-8eb7-659e10b21e92","Type":"ContainerStarted","Data":"55359df701475acd9773ca510be877e78c4e3df4e5458ed15018be8c60a8c535"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.492275 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podUID="246cc20b-aa24-4c15-8eb7-659e10b21e92" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.493315 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" 
event={"ID":"fde95ed3-63bc-4401-b8b8-539da71db026","Type":"ContainerStarted","Data":"c9626e57cda54e43b32583b2356d1f5fd7d32112ea4e926be95e177a15695cc3"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.496396 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podUID="fde95ed3-63bc-4401-b8b8-539da71db026" Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.497659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" event={"ID":"9ec17569-aac1-4b58-8efc-b5a483e47a71","Type":"ContainerStarted","Data":"6f567157c276856ef25fd2b1dd5e118e4274e305fbdd8bd8c037fe9b1a4c8936"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.504744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" event={"ID":"b248c78b-0213-4833-8d04-7d2514c2e673","Type":"ContainerStarted","Data":"929fb73c0f97e8b5d983502beb60707e955b7c1090b1c172e8d0983704755121"} Feb 21 07:02:50 crc kubenswrapper[4820]: I0221 07:02:50.518370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" event={"ID":"18cf798f-3eea-4e15-8bb1-bda4895ffed4","Type":"ContainerStarted","Data":"db96f1c5bb4c0a627bc2a549d99a0ada9bd2b0d99f8231e397c5e41e941157f9"} Feb 21 07:02:50 crc kubenswrapper[4820]: E0221 07:02:50.519756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podUID="18cf798f-3eea-4e15-8bb1-bda4895ffed4" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.539948 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podUID="9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540305 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podUID="18cf798f-3eea-4e15-8bb1-bda4895ffed4" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540346 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podUID="ee323e4c-82c4-4b71-b69b-5aef22e36516" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540543 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podUID="246cc20b-aa24-4c15-8eb7-659e10b21e92" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540635 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podUID="fde95ed3-63bc-4401-b8b8-539da71db026" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.540750 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podUID="047df55d-9730-4215-bbd5-73fd59a0e9f5" Feb 21 07:02:51 crc kubenswrapper[4820]: I0221 07:02:51.675356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.675603 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.675671 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:55.675642922 +0000 UTC m=+950.708727120 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:51 crc kubenswrapper[4820]: I0221 07:02:51.982092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.982514 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:51 crc kubenswrapper[4820]: E0221 07:02:51.982555 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:02:55.982542879 +0000 UTC m=+951.015627077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: I0221 07:02:52.389640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:52 crc kubenswrapper[4820]: I0221 07:02:52.389759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.389812 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.389886 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:56.389868243 +0000 UTC m=+951.422952441 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.389930 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:52 crc kubenswrapper[4820]: E0221 07:02:52.390003 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:02:56.389987307 +0000 UTC m=+951.423071505 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:02:55 crc kubenswrapper[4820]: I0221 07:02:55.750324 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:02:55 crc kubenswrapper[4820]: E0221 07:02:55.750525 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:55 crc kubenswrapper[4820]: E0221 07:02:55.750812 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert 
podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:03.75079029 +0000 UTC m=+958.783874508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: I0221 07:02:56.055045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.055273 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.055321 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:03:04.055305772 +0000 UTC m=+959.088389970 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: I0221 07:02:56.458532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:56 crc kubenswrapper[4820]: I0221 07:02:56.458640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.458820 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.458887 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:04.458867373 +0000 UTC m=+959.491951571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.459625 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 21 07:02:56 crc kubenswrapper[4820]: E0221 07:02:56.459679 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:04.459670566 +0000 UTC m=+959.492754764 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "webhook-server-cert" not found Feb 21 07:03:02 crc kubenswrapper[4820]: E0221 07:03:02.633839 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 21 07:03:02 crc kubenswrapper[4820]: E0221 07:03:02.634256 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdpbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-gxpq6_openstack-operators(b248c78b-0213-4833-8d04-7d2514c2e673): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:02 crc kubenswrapper[4820]: E0221 07:03:02.636283 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" podUID="b248c78b-0213-4833-8d04-7d2514c2e673" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.110088 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.110288 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8zbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-whrpt_openstack-operators(b425a24f-112c-4e36-a173-21a59ce15ef0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.111480 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" podUID="b425a24f-112c-4e36-a173-21a59ce15ef0" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.575720 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.575985 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nz9dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-54dzd_openstack-operators(d922fcc6-f8a7-451a-b998-fc04189a6d85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.577363 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" podUID="d922fcc6-f8a7-451a-b998-fc04189a6d85" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.612756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" podUID="b248c78b-0213-4833-8d04-7d2514c2e673" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.612800 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" podUID="b425a24f-112c-4e36-a173-21a59ce15ef0" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.613512 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" podUID="d922fcc6-f8a7-451a-b998-fc04189a6d85" Feb 21 07:03:03 crc kubenswrapper[4820]: I0221 07:03:03.762955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.763207 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 21 07:03:03 crc kubenswrapper[4820]: E0221 07:03:03.763292 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert podName:2ae82741-a73e-4d45-852f-a206550cb1e9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:19.763273796 +0000 UTC m=+974.796357994 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert") pod "infra-operator-controller-manager-79d975b745-qvl8t" (UID: "2ae82741-a73e-4d45-852f-a206550cb1e9") : secret "infra-operator-webhook-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.066268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.066423 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.066483 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert podName:c4453479-1bc9-4393-8853-396ec6ae4f7f nodeName:}" failed. No retries permitted until 2026-02-21 07:03:20.066464882 +0000 UTC m=+975.099549080 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" (UID: "c4453479-1bc9-4393-8853-396ec6ae4f7f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.198712 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.199012 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vctj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-lgdx6_openstack-operators(903ed1dc-819c-4ed9-86f6-ca32e4f96792): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.200945 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" podUID="903ed1dc-819c-4ed9-86f6-ca32e4f96792" Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.472670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod 
\"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.472765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.472904 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.472957 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs podName:5424a0f0-819f-46e7-9d7d-00bbe249e4a9 nodeName:}" failed. No retries permitted until 2026-02-21 07:03:20.472943563 +0000 UTC m=+975.506027761 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-f84fz" (UID: "5424a0f0-819f-46e7-9d7d-00bbe249e4a9") : secret "metrics-server-cert" not found Feb 21 07:03:04 crc kubenswrapper[4820]: I0221 07:03:04.491640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.618207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" podUID="903ed1dc-819c-4ed9-86f6-ca32e4f96792" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.887614 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.887816 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swgk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-c96wv_openstack-operators(2b4b6741-5442-4ef0-a8e1-49e389157cd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:03:04 crc kubenswrapper[4820]: E0221 07:03:04.889804 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" podUID="2b4b6741-5442-4ef0-a8e1-49e389157cd4" Feb 21 07:03:05 crc kubenswrapper[4820]: E0221 07:03:05.623733 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" podUID="2b4b6741-5442-4ef0-a8e1-49e389157cd4" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.646935 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" event={"ID":"f8b2e5d3-e795-4971-92d9-f0d8f6586fa8","Type":"ContainerStarted","Data":"3efb1612d0921340df29f5c3de8d0b4f622aa74662d0906671c5c6ccc58a542b"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.648278 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.655044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" event={"ID":"76209e29-400d-4677-85b5-89c5f4e9323a","Type":"ContainerStarted","Data":"6d9b9afc82feef2c8d9a1dddcc7082fc801798bad04ecb345400daf07ec14804"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.655654 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.657425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" event={"ID":"4f343be8-a654-43ac-938a-6b726caab1ad","Type":"ContainerStarted","Data":"e04b135d38644b180cc79043c61f6d78d85c06447ffb8552469cfdae68b5eabe"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.657746 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.659179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" event={"ID":"9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1","Type":"ContainerStarted","Data":"2fbf00e2d12cda0513ad1a2600bfb6a7b8f5bcb89fa784dbb8909ded05a8bbd9"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.659563 4820 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.660823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" event={"ID":"ee323e4c-82c4-4b71-b69b-5aef22e36516","Type":"ContainerStarted","Data":"5818006b1ac29dcaf3924d7b396a1dd51f4e95219b56af950dbb238eaf7a3e6a"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.661163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.662700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" event={"ID":"412bd84a-46bb-49b9-8d0a-17d6cc683ea0","Type":"ContainerStarted","Data":"75f82a51ac53be46bdb753d07d47a1b23c7dc37ea5b88ea20f7463ce69e15bff"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.663034 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.677031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" event={"ID":"f8cd79d8-6ba2-467c-95b5-4d965d73ed75","Type":"ContainerStarted","Data":"7c7cdaa465dad51ea90a3c70f72d57d4edc51c585f666bfea58f1b53a11dd3c4"} Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.677201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.681425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" 
event={"ID":"246cc20b-aa24-4c15-8eb7-659e10b21e92","Type":"ContainerStarted","Data":"aa0ff828d3aec8afe89ccbc3c96b86080552e5b1886ba11e6da68e6c03ea4bd3"}
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.681612 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.683654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" event={"ID":"a4f64d1a-4768-48e1-8a88-fbf906956528","Type":"ContainerStarted","Data":"cdd4555cd1536c798acec8a30a3778bfbc0197323d6eee9b50d35f749f5d38b6"}
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.683707 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.687168 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl" podStartSLOduration=4.319142608 podStartE2EDuration="19.687149745s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.992677905 +0000 UTC m=+945.025762103" lastFinishedPulling="2026-02-21 07:03:05.360685042 +0000 UTC m=+960.393769240" observedRunningTime="2026-02-21 07:03:07.686066895 +0000 UTC m=+962.719151093" watchObservedRunningTime="2026-02-21 07:03:07.687149745 +0000 UTC m=+962.720233943"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.688483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" event={"ID":"7ab15a3b-5688-4d42-b99a-e88bb8b11f65","Type":"ContainerStarted","Data":"7fbe38ad2de90b11fef6c18fe9745fa8ec471ac0e911e7f92b48ebf626fa0bf7"}
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.688885 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.712455 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h" podStartSLOduration=4.616994117 podStartE2EDuration="20.712429217s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:48.755419883 +0000 UTC m=+943.788504081" lastFinishedPulling="2026-02-21 07:03:04.850854983 +0000 UTC m=+959.883939181" observedRunningTime="2026-02-21 07:03:07.671859387 +0000 UTC m=+962.704943585" watchObservedRunningTime="2026-02-21 07:03:07.712429217 +0000 UTC m=+962.745513415"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.716317 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn" podStartSLOduration=4.25492438 podStartE2EDuration="20.716301082s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.009998449 +0000 UTC m=+945.043082637" lastFinishedPulling="2026-02-21 07:03:06.471375141 +0000 UTC m=+961.504459339" observedRunningTime="2026-02-21 07:03:07.714559595 +0000 UTC m=+962.747643793" watchObservedRunningTime="2026-02-21 07:03:07.716301082 +0000 UTC m=+962.749385280"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.729571 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" event={"ID":"9ec17569-aac1-4b58-8efc-b5a483e47a71","Type":"ContainerStarted","Data":"8218a4a7be71393ab0b35aeab3a95b224ec7185135c8c8fd2207d6939d5e0f4d"}
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.730438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.751900 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn" podStartSLOduration=3.957540904 podStartE2EDuration="20.751879336s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.592310091 +0000 UTC m=+944.625394289" lastFinishedPulling="2026-02-21 07:03:06.386648523 +0000 UTC m=+961.419732721" observedRunningTime="2026-02-21 07:03:07.746230292 +0000 UTC m=+962.779314490" watchObservedRunningTime="2026-02-21 07:03:07.751879336 +0000 UTC m=+962.784963534"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.767494 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm" podStartSLOduration=4.663592071 podStartE2EDuration="20.767474633s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.255470954 +0000 UTC m=+944.288555152" lastFinishedPulling="2026-02-21 07:03:05.359353516 +0000 UTC m=+960.392437714" observedRunningTime="2026-02-21 07:03:07.760482251 +0000 UTC m=+962.793566449" watchObservedRunningTime="2026-02-21 07:03:07.767474633 +0000 UTC m=+962.800558851"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.773301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" event={"ID":"3c9c6322-ba57-47b3-a079-ab86a6660c45","Type":"ContainerStarted","Data":"97a93a3ab3298b861e7917b549aacf53c604fe8d8181182d6bef844181a1c607"}
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.773836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.779052 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2" podStartSLOduration=3.355233225 podStartE2EDuration="19.779041329s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.037609824 +0000 UTC m=+945.070694032" lastFinishedPulling="2026-02-21 07:03:06.461417938 +0000 UTC m=+961.494502136" observedRunningTime="2026-02-21 07:03:07.77761039 +0000 UTC m=+962.810694598" watchObservedRunningTime="2026-02-21 07:03:07.779041329 +0000 UTC m=+962.812125527"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.841682 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc" podStartSLOduration=3.415446973 podStartE2EDuration="19.841666673s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.037906442 +0000 UTC m=+945.070990640" lastFinishedPulling="2026-02-21 07:03:06.464126142 +0000 UTC m=+961.497210340" observedRunningTime="2026-02-21 07:03:07.840008597 +0000 UTC m=+962.873092795" watchObservedRunningTime="2026-02-21 07:03:07.841666673 +0000 UTC m=+962.874750871"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.843772 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z" podStartSLOduration=4.845736406 podStartE2EDuration="20.84376487s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.361321411 +0000 UTC m=+944.394405609" lastFinishedPulling="2026-02-21 07:03:05.359349875 +0000 UTC m=+960.392434073" observedRunningTime="2026-02-21 07:03:07.81307277 +0000 UTC m=+962.846156968" watchObservedRunningTime="2026-02-21 07:03:07.84376487 +0000 UTC m=+962.876849068"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.857166 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh" podStartSLOduration=4.318381367 podStartE2EDuration="20.857147416s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.345855088 +0000 UTC m=+944.378939286" lastFinishedPulling="2026-02-21 07:03:05.884621137 +0000 UTC m=+960.917705335" observedRunningTime="2026-02-21 07:03:07.856344444 +0000 UTC m=+962.889428642" watchObservedRunningTime="2026-02-21 07:03:07.857147416 +0000 UTC m=+962.890231614"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.872975 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv" podStartSLOduration=5.839496744 podStartE2EDuration="20.872955308s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.817376729 +0000 UTC m=+944.850460927" lastFinishedPulling="2026-02-21 07:03:04.850835283 +0000 UTC m=+959.883919491" observedRunningTime="2026-02-21 07:03:07.871612372 +0000 UTC m=+962.904696570" watchObservedRunningTime="2026-02-21 07:03:07.872955308 +0000 UTC m=+962.906039506"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.925573 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd" podStartSLOduration=3.886095859 podStartE2EDuration="20.925555088s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.368964619 +0000 UTC m=+944.402048817" lastFinishedPulling="2026-02-21 07:03:06.408423808 +0000 UTC m=+961.441508046" observedRunningTime="2026-02-21 07:03:07.896477772 +0000 UTC m=+962.929561970" watchObservedRunningTime="2026-02-21 07:03:07.925555088 +0000 UTC m=+962.958639286"
Feb 21 07:03:07 crc kubenswrapper[4820]: I0221 07:03:07.929547 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b" podStartSLOduration=4.14446856 podStartE2EDuration="20.929530777s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.623114554 +0000 UTC m=+944.656198752" lastFinishedPulling="2026-02-21 07:03:06.408176771 +0000 UTC m=+961.441260969" observedRunningTime="2026-02-21 07:03:07.91684462 +0000 UTC m=+962.949928818" watchObservedRunningTime="2026-02-21 07:03:07.929530777 +0000 UTC m=+962.962614965"
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.798429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" event={"ID":"18cf798f-3eea-4e15-8bb1-bda4895ffed4","Type":"ContainerStarted","Data":"416fcdccb11164b43d740b4b5d61c319f1d9403c1c679cb44ab47bdd46867e95"}
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.800463 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" event={"ID":"fde95ed3-63bc-4401-b8b8-539da71db026","Type":"ContainerStarted","Data":"b1192e218f12fc5600249dff73a993446695f0c802cc54411c087ade54e19d94"}
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.802131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" event={"ID":"047df55d-9730-4215-bbd5-73fd59a0e9f5","Type":"ContainerStarted","Data":"6baf86d25604e57fcb0255cefe43d0b3f2a6212c279f009faa700c2e140acfb1"}
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.802301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.836077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f" podStartSLOduration=3.5746306580000002 podStartE2EDuration="24.836061792s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.82219867 +0000 UTC m=+944.855282868" lastFinishedPulling="2026-02-21 07:03:11.083629804 +0000 UTC m=+966.116714002" observedRunningTime="2026-02-21 07:03:11.835218358 +0000 UTC m=+966.868302576" watchObservedRunningTime="2026-02-21 07:03:11.836061792 +0000 UTC m=+966.869145990"
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.839139 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn" podStartSLOduration=3.793654939 podStartE2EDuration="24.839132945s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:50.038177689 +0000 UTC m=+945.071261887" lastFinishedPulling="2026-02-21 07:03:11.083655705 +0000 UTC m=+966.116739893" observedRunningTime="2026-02-21 07:03:11.821871263 +0000 UTC m=+966.854955461" watchObservedRunningTime="2026-02-21 07:03:11.839132945 +0000 UTC m=+966.872217143"
Feb 21 07:03:11 crc kubenswrapper[4820]: I0221 07:03:11.858687 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wv5gr" podStartSLOduration=2.560237253 podStartE2EDuration="23.85866744s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.84741267 +0000 UTC m=+944.880496868" lastFinishedPulling="2026-02-21 07:03:11.145842847 +0000 UTC m=+966.178927055" observedRunningTime="2026-02-21 07:03:11.852953604 +0000 UTC m=+966.886037802" watchObservedRunningTime="2026-02-21 07:03:11.85866744 +0000 UTC m=+966.891751638"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.844268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" event={"ID":"b248c78b-0213-4833-8d04-7d2514c2e673","Type":"ContainerStarted","Data":"45b992f75781678fd8292861d5a3b08d45baa97eaa8e9537383efd48c7712d5d"}
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.845316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.847119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" event={"ID":"b425a24f-112c-4e36-a173-21a59ce15ef0","Type":"ContainerStarted","Data":"369f6c95ea59095a532c1b2410684bb776de55541e04a12e74ecdb16090ec5a1"}
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.847281 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.858640 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6" podStartSLOduration=3.495121143 podStartE2EDuration="30.858622143s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.801755602 +0000 UTC m=+944.834839800" lastFinishedPulling="2026-02-21 07:03:17.165256602 +0000 UTC m=+972.198340800" observedRunningTime="2026-02-21 07:03:17.85665461 +0000 UTC m=+972.889738828" watchObservedRunningTime="2026-02-21 07:03:17.858622143 +0000 UTC m=+972.891706341"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.872985 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt" podStartSLOduration=2.524044033 podStartE2EDuration="29.872960105s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.816361311 +0000 UTC m=+944.849445499" lastFinishedPulling="2026-02-21 07:03:17.165277363 +0000 UTC m=+972.198361571" observedRunningTime="2026-02-21 07:03:17.86837111 +0000 UTC m=+972.901455308" watchObservedRunningTime="2026-02-21 07:03:17.872960105 +0000 UTC m=+972.906044303"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.959632 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-lrgjm"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.976061 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lx4sd"
Feb 21 07:03:17 crc kubenswrapper[4820]: I0221 07:03:17.990299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7fq9h"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.060627 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-gbtvh"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.111154 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tlx7z"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.163455 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t6t6b"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.256819 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fj4tn"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.415911 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-pbn9f"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.446311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-lzhqv"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.573618 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-2dfxn"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.606675 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.608887 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6dpn"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.770252 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cv9cl"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.789540 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-jdxhc"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.855114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" event={"ID":"903ed1dc-819c-4ed9-86f6-ca32e4f96792","Type":"ContainerStarted","Data":"cdc349be90fbaebf78e8e8079ade1530f0921606722048af69812815eab14d4e"}
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.855293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.856774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" event={"ID":"d922fcc6-f8a7-451a-b998-fc04189a6d85","Type":"ContainerStarted","Data":"51ad6e3259f98f41ed77a7903e284798ad94f809c3733e9222ef28a185412b01"}
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.857109 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.887263 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6" podStartSLOduration=3.363906742 podStartE2EDuration="31.887227937s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.58170655 +0000 UTC m=+944.614790748" lastFinishedPulling="2026-02-21 07:03:18.105027745 +0000 UTC m=+973.138111943" observedRunningTime="2026-02-21 07:03:18.868815343 +0000 UTC m=+973.901899541" watchObservedRunningTime="2026-02-21 07:03:18.887227937 +0000 UTC m=+973.920312135"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.889503 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd" podStartSLOduration=3.6002155780000002 podStartE2EDuration="31.889493538s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.816262278 +0000 UTC m=+944.849346476" lastFinishedPulling="2026-02-21 07:03:18.105540218 +0000 UTC m=+973.138624436" observedRunningTime="2026-02-21 07:03:18.884889022 +0000 UTC m=+973.917973220" watchObservedRunningTime="2026-02-21 07:03:18.889493538 +0000 UTC m=+973.922577736"
Feb 21 07:03:18 crc kubenswrapper[4820]: I0221 07:03:18.933051 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-jt2g2"
Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.804346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"
Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.813820 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae82741-a73e-4d45-852f-a206550cb1e9-cert\") pod \"infra-operator-controller-manager-79d975b745-qvl8t\" (UID: \"2ae82741-a73e-4d45-852f-a206550cb1e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"
Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.863476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" event={"ID":"2b4b6741-5442-4ef0-a8e1-49e389157cd4","Type":"ContainerStarted","Data":"366caa71f0aa3cbab5987da3dfb49df8274a416a7727bcfa3782ffcca2111cc2"}
Feb 21 07:03:19 crc kubenswrapper[4820]: I0221 07:03:19.983998 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4xx9d"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.021161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.129001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.141000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4453479-1bc9-4393-8853-396ec6ae4f7f-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf\" (UID: \"c4453479-1bc9-4393-8853-396ec6ae4f7f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.241362 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"]
Feb 21 07:03:20 crc kubenswrapper[4820]: W0221 07:03:20.253446 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae82741_a73e_4d45_852f_a206550cb1e9.slice/crio-911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4 WatchSource:0}: Error finding container 911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4: Status 404 returned error can't find the container with id 911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.382001 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cbvg5"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.395581 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.534327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.542616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5424a0f0-819f-46e7-9d7d-00bbe249e4a9-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-f84fz\" (UID: \"5424a0f0-819f-46e7-9d7d-00bbe249e4a9\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.627261 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"]
Feb 21 07:03:20 crc kubenswrapper[4820]: W0221 07:03:20.632180 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4453479_1bc9_4393_8853_396ec6ae4f7f.slice/crio-23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755 WatchSource:0}: Error finding container 23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755: Status 404 returned error can't find the container with id 23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.840474 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cvbk4"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.849109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.870595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" event={"ID":"2ae82741-a73e-4d45-852f-a206550cb1e9","Type":"ContainerStarted","Data":"911f8d447197d86f1226bd08893ebefa84718e47422f6e41fea5daadc181ccd4"}
Feb 21 07:03:20 crc kubenswrapper[4820]: I0221 07:03:20.871895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" event={"ID":"c4453479-1bc9-4393-8853-396ec6ae4f7f","Type":"ContainerStarted","Data":"23bb1cb300d31a513840454ce99d329808f1e0da6d8a475f24bf0282d5202755"}
Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.256916 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"]
Feb 21 07:03:21 crc kubenswrapper[4820]: W0221 07:03:21.258263 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5424a0f0_819f_46e7_9d7d_00bbe249e4a9.slice/crio-4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404 WatchSource:0}: Error finding container 4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404: Status 404 returned error can't find the container with id 4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404
Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.880702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" event={"ID":"5424a0f0-819f-46e7-9d7d-00bbe249e4a9","Type":"ContainerStarted","Data":"4f1f35f6aae1488e4981235f60624d5f39a723253fe5a67044d6cd280cb1e404"}
Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.880771 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"
Feb 21 07:03:21 crc kubenswrapper[4820]: I0221 07:03:21.896524 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv" podStartSLOduration=5.585497161 podStartE2EDuration="34.896503387s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:02:49.816486514 +0000 UTC m=+944.849570702" lastFinishedPulling="2026-02-21 07:03:19.12749273 +0000 UTC m=+974.160576928" observedRunningTime="2026-02-21 07:03:21.892729804 +0000 UTC m=+976.925814002" watchObservedRunningTime="2026-02-21 07:03:21.896503387 +0000 UTC m=+976.929587585"
Feb 21 07:03:26 crc kubenswrapper[4820]: I0221 07:03:26.913442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" event={"ID":"5424a0f0-819f-46e7-9d7d-00bbe249e4a9","Type":"ContainerStarted","Data":"238def0c128f64bdd62a932bd33e3221602451cf84d633811f4b0dafb85d55c7"}
Feb 21 07:03:26 crc kubenswrapper[4820]: I0221 07:03:26.913985 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"
Feb 21 07:03:26 crc kubenswrapper[4820]: I0221 07:03:26.947774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz" podStartSLOduration=38.947750651 podStartE2EDuration="38.947750651s" podCreationTimestamp="2026-02-21 07:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:03:26.93597742 +0000 UTC m=+981.969061618" watchObservedRunningTime="2026-02-21 07:03:26.947750651 +0000 UTC m=+981.980834849"
Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.285853 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lgdx6"
Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.442313 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gxpq6"
Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.482181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-c96wv"
Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.492808 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-54dzd"
Feb 21 07:03:28 crc kubenswrapper[4820]: I0221 07:03:28.919383 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-whrpt"
Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.931656 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" event={"ID":"2ae82741-a73e-4d45-852f-a206550cb1e9","Type":"ContainerStarted","Data":"edf491468e9dca651aff1278be210b4e198227f49dec2734353deac9141c0189"}
Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.933096 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"
Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.934038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" event={"ID":"c4453479-1bc9-4393-8853-396ec6ae4f7f","Type":"ContainerStarted","Data":"5365ed08bcb909816ae09e97f0deef291904e77b7305621ff9678689b30831dd"}
Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.934131 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"
Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.952607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t" podStartSLOduration=34.386371238 podStartE2EDuration="42.952587158s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:03:20.255538186 +0000 UTC m=+975.288622394" lastFinishedPulling="2026-02-21 07:03:28.821754116 +0000 UTC m=+983.854838314" observedRunningTime="2026-02-21 07:03:29.947555541 +0000 UTC m=+984.980639749" watchObservedRunningTime="2026-02-21 07:03:29.952587158 +0000 UTC m=+984.985671356"
Feb 21 07:03:29 crc kubenswrapper[4820]: I0221 07:03:29.981854 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf" podStartSLOduration=34.796753388 podStartE2EDuration="42.981836846s" podCreationTimestamp="2026-02-21 07:02:47 +0000 UTC" firstStartedPulling="2026-02-21 07:03:20.634038006 +0000 UTC m=+975.667122204" lastFinishedPulling="2026-02-21 07:03:28.819121464 +0000 UTC m=+983.852205662" observedRunningTime="2026-02-21 07:03:29.970749114 +0000 UTC m=+985.003833322" watchObservedRunningTime="2026-02-21 07:03:29.981836846 +0000 UTC m=+985.014921044"
Feb 21 07:03:40 crc kubenswrapper[4820]: I0221 07:03:40.027399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qvl8t"
Feb 21 07:03:40 crc kubenswrapper[4820]: I0221 07:03:40.401399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf"
Feb 21 07:03:40 crc kubenswrapper[4820]: I0221 07:03:40.857879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-f84fz"
Feb 21 07:03:43 crc kubenswrapper[4820]: I0221 07:03:43.816255 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:03:43 crc kubenswrapper[4820]: I0221 07:03:43.816314 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.400851 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"]
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.403888 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.405696 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fjv69"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.410698 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.410732 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.412799 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.419926 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"]
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.481992 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"]
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.489855 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"]
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.489959 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.492513 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.553778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.553891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655116 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj"
Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.655339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.656194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.676641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"dnsmasq-dns-855cbc58c5-9mgmj\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.721227 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.757181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.758418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.758517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.758591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.759372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 
07:03:55.775536 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"dnsmasq-dns-6fcf94d689-pdmt8\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.808807 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.953442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:03:55 crc kubenswrapper[4820]: I0221 07:03:55.958634 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:03:56 crc kubenswrapper[4820]: I0221 07:03:56.252007 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:03:56 crc kubenswrapper[4820]: W0221 07:03:56.256496 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d05b01_8b86_4bf7_9b3b_ed179f362f27.slice/crio-4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71 WatchSource:0}: Error finding container 4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71: Status 404 returned error can't find the container with id 4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71 Feb 21 07:03:56 crc kubenswrapper[4820]: I0221 07:03:56.341197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" event={"ID":"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443","Type":"ContainerStarted","Data":"5080cc5ff51ef876cebda0dd6972095bb3db5861f03d8743a19d9051a2266fb4"} Feb 21 07:03:56 crc kubenswrapper[4820]: I0221 07:03:56.342200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" event={"ID":"c1d05b01-8b86-4bf7-9b3b-ed179f362f27","Type":"ContainerStarted","Data":"4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71"} Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.746279 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.801082 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.802524 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.809084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.896370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.896720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.896740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: 
\"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.998259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.998316 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:57 crc kubenswrapper[4820]: I0221 07:03:57.998408 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.000171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.003547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc 
kubenswrapper[4820]: I0221 07:03:58.051492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"dnsmasq-dns-6d6b9fdb89-b9hkw\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.165644 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.176970 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.201060 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.202765 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.211744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.211803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.211872 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.219125 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.313006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.313104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.313134 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.314733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 
crc kubenswrapper[4820]: I0221 07:03:58.315073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.335576 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"dnsmasq-dns-67ff45466c-47ln4\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.573801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.787193 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:03:58 crc kubenswrapper[4820]: W0221 07:03:58.825904 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod924235f7_e875_49cd_b7c1_1cfa96515a97.slice/crio-5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c WatchSource:0}: Error finding container 5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c: Status 404 returned error can't find the container with id 5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.997791 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:03:58 crc kubenswrapper[4820]: I0221 07:03:58.999570 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.001046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002108 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4n8x9" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002609 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002826 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.002939 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.004395 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.004532 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.011019 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.081296 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131414 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131432 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131502 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.131595 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233331 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233358 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.233524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.234563 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.234730 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.234886 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.235404 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.235566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.235585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.239148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.239874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.253580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.259459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.270003 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.270640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.307163 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.308605 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312459 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312484 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312577 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312763 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312785 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312865 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4bthw"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.312988 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.329492 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.343132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.396051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerStarted","Data":"89ede790e040e0e9c21f3a91218ea509876d44fee835aa305d75785ff546742f"}
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.397628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerStarted","Data":"5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c"}
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.436951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.436995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437044 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437198 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437350 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.437557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539781 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.539958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.540004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541526 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.541945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.547186 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.557599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.561506 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.569280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.570979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.588658 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"rabbitmq-server-0\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " pod="openstack/rabbitmq-server-0"
Feb 21 07:03:59 crc kubenswrapper[4820]: I0221 07:03:59.643297 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.627173 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.628326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.630362 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.634300 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.634313 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ldndf"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.634416 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.636081 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.639258 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.759904 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.759948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.759984 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.760096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861868 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861949 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.861971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862428 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.862684 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.863123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.863353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.863852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.880815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.893020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.893684 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.893870 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " pod="openstack/openstack-galera-0"
Feb 21 07:04:00 crc kubenswrapper[4820]: I0221 07:04:00.960643 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.091962 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.093020 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.095256 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.095749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qq6hv"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.096023 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.096184 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.107548 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.197757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.198647 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.200308 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.200431 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vhpwj"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.203961 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.215203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.230891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.230959 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231770 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.231973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.333991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0"
Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.334391 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.335694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.336118 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"openstack-cell1-galera-0\" 
(UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.336254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.339489 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.340343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.340854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.354988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 
07:04:02.365305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"openstack-cell1-galera-0\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") " pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.412050 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.435717 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 
07:04:02.435742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.436334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.437175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.438815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.440702 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.456900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"memcached-0\" (UID: 
\"4f99a57a-608b-4678-9be5-abc4347c8bcb\") " pod="openstack/memcached-0" Feb 21 07:04:02 crc kubenswrapper[4820]: I0221 07:04:02.517131 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.392838 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.394692 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.398157 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-22vkm" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.411915 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.563009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"kube-state-metrics-0\" (UID: \"df55e56a-dbd2-4082-8915-c095d79a0445\") " pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.664696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"kube-state-metrics-0\" (UID: \"df55e56a-dbd2-4082-8915-c095d79a0445\") " pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.686459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"kube-state-metrics-0\" (UID: 
\"df55e56a-dbd2-4082-8915-c095d79a0445\") " pod="openstack/kube-state-metrics-0" Feb 21 07:04:04 crc kubenswrapper[4820]: I0221 07:04:04.716794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.606223 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.607649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.609119 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hmdkm" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.609278 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.610790 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.613900 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.616395 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.625511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.665312 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706603 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: 
I0221 07:04:07.706696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706720 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706904 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706926 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.706960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810262 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810291 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810313 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810488 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " 
pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.810543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811604 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.811891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.812139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.812962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.813177 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.813836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.815627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" 
Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.828061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"ovn-controller-ovs-rwsk7\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.828313 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.828581 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"ovn-controller-sfpp9\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.928094 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:07 crc kubenswrapper[4820]: I0221 07:04:07.942690 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.953432 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.964277 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.965585 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966085 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qmlp6" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966184 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966408 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.966600 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 21 07:04:08 crc kubenswrapper[4820]: I0221 07:04:08.969919 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133540 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133946 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133962 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.133987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.134030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.134080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.235984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236035 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236460 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.236480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.237161 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.237478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.248431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.255262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.257777 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.258190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc 
kubenswrapper[4820]: I0221 07:04:09.260658 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:09 crc kubenswrapper[4820]: I0221 07:04:09.287878 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.765771 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.768450 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.771843 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.771904 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dg8hr" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.772062 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.775556 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.781194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865734 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " 
pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.865846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967376 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967498 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967551 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.967574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.968133 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.968285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.968612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.969221 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.975459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.975899 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.977618 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.983429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:10 crc kubenswrapper[4820]: I0221 07:04:10.988891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:11 crc kubenswrapper[4820]: I0221 07:04:11.093564 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.504050 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.504487 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbhzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readines
sProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fcf94d689-pdmt8_openstack(c1d05b01-8b86-4bf7-9b3b-ed179f362f27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.505820 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" podUID="c1d05b01-8b86-4bf7-9b3b-ed179f362f27" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.516573 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.516698 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgn5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-855cbc58c5-9mgmj_openstack(25fe0e65-8e41-4f6a-b4ba-499f9ffc6443): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:04:13 crc kubenswrapper[4820]: E0221 07:04:13.518047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" podUID="25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" Feb 21 07:04:13 crc kubenswrapper[4820]: I0221 07:04:13.816734 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:04:13 crc kubenswrapper[4820]: I0221 07:04:13.816962 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.103775 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.114760 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.135942 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.236761 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.243066 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.327794 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: W0221 07:04:14.335060 4820 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455bfe0a_a135_4900_8b15_ce584dc8a5bb.slice/crio-a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e WatchSource:0}: Error finding container a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e: Status 404 returned error can't find the container with id a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.377656 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:04:14 crc kubenswrapper[4820]: W0221 07:04:14.395469 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod593c6a26_a16a_4cf6_8aa9_b20bb6d56da7.slice/crio-a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe WatchSource:0}: Error finding container a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe: Status 404 returned error can't find the container with id a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.403129 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:04:14 crc kubenswrapper[4820]: W0221 07:04:14.410591 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81af4bd_d2af_4a26_8f4d_a3e612778607.slice/crio-11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a WatchSource:0}: Error finding container 11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a: Status 404 returned error can't find the container with id 11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.436814 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:04:14 
crc kubenswrapper[4820]: W0221 07:04:14.441408 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7880da24_89a6_4428_b9c1_5ffe6647af01.slice/crio-675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93 WatchSource:0}: Error finding container 675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93: Status 404 returned error can't find the container with id 675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93 Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.503302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerStarted","Data":"11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.505076 4820 generic.go:334] "Generic (PLEG): container finished" podID="85621024-c5dd-4598-817a-62024db91c1d" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" exitCode=0 Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.505128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerDied","Data":"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.506117 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerStarted","Data":"a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.507543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerStarted","Data":"506d7091e1481dd403657fac413ff300e649bdb874981551b296a055c67d3957"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.511364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerStarted","Data":"49654605e076770c4b1f63011fc38c031abfbddaf42bcc3556d4899ef0c6f4eb"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.512279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerStarted","Data":"675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.513120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerStarted","Data":"a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.513892 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerStarted","Data":"c7e2b7a7c0a492a7d1fe2c8d85d83a8801b3d4fa1ad893af52ea27c7826ffccc"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.515556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerStarted","Data":"77697e6f65480c0a8c7ecc85d340b2d52d583c5d92b5093accb994850dd6cd98"} Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.516607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerStarted","Data":"60eb280dafd317b213ced0ce92cb061208211ecad999bed743c8a76df9e0ad8d"} Feb 21 
07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.518305 4820 generic.go:334] "Generic (PLEG): container finished" podID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6" exitCode=0 Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.518381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerDied","Data":"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"} Feb 21 07:04:14 crc kubenswrapper[4820]: E0221 07:04:14.766463 4820 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 21 07:04:14 crc kubenswrapper[4820]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 07:04:14 crc kubenswrapper[4820]: > podSandboxID="5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c" Feb 21 07:04:14 crc kubenswrapper[4820]: E0221 07:04:14.766906 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:04:14 crc kubenswrapper[4820]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kc55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6d6b9fdb89-b9hkw_openstack(924235f7-e875-49cd-b7c1-1cfa96515a97): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 07:04:14 crc kubenswrapper[4820]: > logger="UnhandledError" Feb 21 07:04:14 crc kubenswrapper[4820]: E0221 07:04:14.768085 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.881229 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.957755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") pod \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.957914 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") pod \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\" (UID: \"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443\") " Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.958211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config" (OuterVolumeSpecName: "config") pod "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" (UID: "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.958419 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:14 crc kubenswrapper[4820]: I0221 07:04:14.966926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m" (OuterVolumeSpecName: "kube-api-access-xgn5m") pod "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" (UID: "25fe0e65-8e41-4f6a-b4ba-499f9ffc6443"). InnerVolumeSpecName "kube-api-access-xgn5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.019678 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.059904 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgn5m\" (UniqueName: \"kubernetes.io/projected/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443-kube-api-access-xgn5m\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.160600 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") pod \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.160702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") pod \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.160750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") pod \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\" (UID: \"c1d05b01-8b86-4bf7-9b3b-ed179f362f27\") " Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.161689 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1d05b01-8b86-4bf7-9b3b-ed179f362f27" (UID: "c1d05b01-8b86-4bf7-9b3b-ed179f362f27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.161815 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config" (OuterVolumeSpecName: "config") pod "c1d05b01-8b86-4bf7-9b3b-ed179f362f27" (UID: "c1d05b01-8b86-4bf7-9b3b-ed179f362f27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.166155 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr" (OuterVolumeSpecName: "kube-api-access-dbhzr") pod "c1d05b01-8b86-4bf7-9b3b-ed179f362f27" (UID: "c1d05b01-8b86-4bf7-9b3b-ed179f362f27"). InnerVolumeSpecName "kube-api-access-dbhzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.225027 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.262931 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbhzr\" (UniqueName: \"kubernetes.io/projected/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-kube-api-access-dbhzr\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.262964 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.262973 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d05b01-8b86-4bf7-9b3b-ed179f362f27-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:15 crc kubenswrapper[4820]: W0221 07:04:15.489908 4820 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf0c3ff8_e36f_4539_a7da_9d2b1e7a146d.slice/crio-b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2 WatchSource:0}: Error finding container b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2: Status 404 returned error can't find the container with id b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2 Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.531810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerStarted","Data":"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.531872 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.537582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" event={"ID":"c1d05b01-8b86-4bf7-9b3b-ed179f362f27","Type":"ContainerDied","Data":"4eced59b9a2cc582ab81656f2dbcd3a050a75557bdd421766475cbe273279b71"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.537595 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-pdmt8" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.539384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerStarted","Data":"b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.541543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" event={"ID":"25fe0e65-8e41-4f6a-b4ba-499f9ffc6443","Type":"ContainerDied","Data":"5080cc5ff51ef876cebda0dd6972095bb3db5861f03d8743a19d9051a2266fb4"} Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.541573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-9mgmj" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.552823 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" podStartSLOduration=3.046876297 podStartE2EDuration="17.552804343s" podCreationTimestamp="2026-02-21 07:03:58 +0000 UTC" firstStartedPulling="2026-02-21 07:03:59.093994781 +0000 UTC m=+1014.127078969" lastFinishedPulling="2026-02-21 07:04:13.599922817 +0000 UTC m=+1028.633007015" observedRunningTime="2026-02-21 07:04:15.548200358 +0000 UTC m=+1030.581284556" watchObservedRunningTime="2026-02-21 07:04:15.552804343 +0000 UTC m=+1030.585888541" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.632601 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.649491 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-9mgmj"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.675364 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 
21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.682249 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-pdmt8"] Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.712486 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fe0e65-8e41-4f6a-b4ba-499f9ffc6443" path="/var/lib/kubelet/pods/25fe0e65-8e41-4f6a-b4ba-499f9ffc6443/volumes" Feb 21 07:04:15 crc kubenswrapper[4820]: I0221 07:04:15.713121 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d05b01-8b86-4bf7-9b3b-ed179f362f27" path="/var/lib/kubelet/pods/c1d05b01-8b86-4bf7-9b3b-ed179f362f27/volumes" Feb 21 07:04:21 crc kubenswrapper[4820]: I0221 07:04:21.590071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerStarted","Data":"a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff"} Feb 21 07:04:21 crc kubenswrapper[4820]: I0221 07:04:21.590452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 21 07:04:21 crc kubenswrapper[4820]: I0221 07:04:21.612179 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.118880023 podStartE2EDuration="19.612157773s" podCreationTimestamp="2026-02-21 07:04:02 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.258622917 +0000 UTC m=+1029.291707115" lastFinishedPulling="2026-02-21 07:04:20.751900667 +0000 UTC m=+1035.784984865" observedRunningTime="2026-02-21 07:04:21.608074761 +0000 UTC m=+1036.641158959" watchObservedRunningTime="2026-02-21 07:04:21.612157773 +0000 UTC m=+1036.645241971" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.598305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" 
event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerStarted","Data":"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.598913 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.601606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerStarted","Data":"baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.601957 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sfpp9" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.603945 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerStarted","Data":"4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.606845 4820 generic.go:334] "Generic (PLEG): container finished" podID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerID="f0e8cd813e640fb93541738f45335efda88900c442e4f6521a72b6bc4a25130d" exitCode=0 Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.606938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"f0e8cd813e640fb93541738f45335efda88900c442e4f6521a72b6bc4a25130d"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.608387 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerStarted","Data":"e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9"} Feb 21 07:04:22 
crc kubenswrapper[4820]: I0221 07:04:22.610330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerStarted","Data":"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.614280 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerStarted","Data":"763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.619018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerStarted","Data":"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.619520 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.625853 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" podStartSLOduration=10.875907662 podStartE2EDuration="25.62582821s" podCreationTimestamp="2026-02-21 07:03:57 +0000 UTC" firstStartedPulling="2026-02-21 07:03:58.83170594 +0000 UTC m=+1013.864790138" lastFinishedPulling="2026-02-21 07:04:13.581626488 +0000 UTC m=+1028.614710686" observedRunningTime="2026-02-21 07:04:22.620757681 +0000 UTC m=+1037.653841889" watchObservedRunningTime="2026-02-21 07:04:22.62582821 +0000 UTC m=+1037.658912408" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.627446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerStarted","Data":"5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf"} Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.692713 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sfpp9" podStartSLOduration=8.734206698 podStartE2EDuration="15.692696562s" podCreationTimestamp="2026-02-21 07:04:07 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.405063599 +0000 UTC m=+1029.438147797" lastFinishedPulling="2026-02-21 07:04:21.363553463 +0000 UTC m=+1036.396637661" observedRunningTime="2026-02-21 07:04:22.684718165 +0000 UTC m=+1037.717802373" watchObservedRunningTime="2026-02-21 07:04:22.692696562 +0000 UTC m=+1037.725780760" Feb 21 07:04:22 crc kubenswrapper[4820]: I0221 07:04:22.709799 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.358194176 podStartE2EDuration="18.709778308s" podCreationTimestamp="2026-02-21 07:04:04 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.116874392 +0000 UTC m=+1029.149958610" lastFinishedPulling="2026-02-21 07:04:21.468458544 +0000 UTC m=+1036.501542742" observedRunningTime="2026-02-21 07:04:22.70509685 +0000 UTC m=+1037.738181048" watchObservedRunningTime="2026-02-21 07:04:22.709778308 +0000 UTC m=+1037.742862506" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.574866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.679042 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"] Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.680595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" 
event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerStarted","Data":"d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6"} Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.680741 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerStarted","Data":"355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73"} Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.681202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.681266 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.709934 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rwsk7" podStartSLOduration=10.287186731 podStartE2EDuration="16.709916337s" podCreationTimestamp="2026-02-21 07:04:07 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.443367804 +0000 UTC m=+1029.476452002" lastFinishedPulling="2026-02-21 07:04:20.86609741 +0000 UTC m=+1035.899181608" observedRunningTime="2026-02-21 07:04:23.703214634 +0000 UTC m=+1038.736298832" watchObservedRunningTime="2026-02-21 07:04:23.709916337 +0000 UTC m=+1038.743000535" Feb 21 07:04:23 crc kubenswrapper[4820]: I0221 07:04:23.718466 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerStarted","Data":"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012"} Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.725061 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerStarted","Data":"087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e"} Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.727195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerStarted","Data":"9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030"} Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.727286 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns" containerID="cri-o://8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" gracePeriod=10 Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.751803 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.509104581999999 podStartE2EDuration="17.751787734s" podCreationTimestamp="2026-02-21 07:04:07 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.337180539 +0000 UTC m=+1029.370264737" lastFinishedPulling="2026-02-21 07:04:23.579863691 +0000 UTC m=+1038.612947889" observedRunningTime="2026-02-21 07:04:24.744710371 +0000 UTC m=+1039.777794569" watchObservedRunningTime="2026-02-21 07:04:24.751787734 +0000 UTC m=+1039.784871932" Feb 21 07:04:24 crc kubenswrapper[4820]: I0221 07:04:24.769931 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.687732947 podStartE2EDuration="15.769916328s" podCreationTimestamp="2026-02-21 07:04:09 +0000 UTC" firstStartedPulling="2026-02-21 07:04:15.492673263 +0000 UTC m=+1030.525757461" lastFinishedPulling="2026-02-21 07:04:23.574856644 +0000 UTC m=+1038.607940842" observedRunningTime="2026-02-21 07:04:24.764350186 +0000 UTC m=+1039.797434384" 
watchObservedRunningTime="2026-02-21 07:04:24.769916328 +0000 UTC m=+1039.803000526" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.270422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.423050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") pod \"924235f7-e875-49cd-b7c1-1cfa96515a97\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.423259 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") pod \"924235f7-e875-49cd-b7c1-1cfa96515a97\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.423295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") pod \"924235f7-e875-49cd-b7c1-1cfa96515a97\" (UID: \"924235f7-e875-49cd-b7c1-1cfa96515a97\") " Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.428711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55" (OuterVolumeSpecName: "kube-api-access-6kc55") pod "924235f7-e875-49cd-b7c1-1cfa96515a97" (UID: "924235f7-e875-49cd-b7c1-1cfa96515a97"). InnerVolumeSpecName "kube-api-access-6kc55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.464841 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "924235f7-e875-49cd-b7c1-1cfa96515a97" (UID: "924235f7-e875-49cd-b7c1-1cfa96515a97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.482303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config" (OuterVolumeSpecName: "config") pod "924235f7-e875-49cd-b7c1-1cfa96515a97" (UID: "924235f7-e875-49cd-b7c1-1cfa96515a97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.525425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kc55\" (UniqueName: \"kubernetes.io/projected/924235f7-e875-49cd-b7c1-1cfa96515a97-kube-api-access-6kc55\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.525454 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.525463 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/924235f7-e875-49cd-b7c1-1cfa96515a97-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746063 4820 generic.go:334] "Generic (PLEG): container finished" podID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac" exitCode=0 Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 
07:04:25.746183 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerDied","Data":"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"}
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6b9fdb89-b9hkw" event={"ID":"924235f7-e875-49cd-b7c1-1cfa96515a97","Type":"ContainerDied","Data":"5efec124cb6c6d1841f29bef1e715c600ce87eea191019eb95128737b62cd64c"}
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.746404 4820 scope.go:117] "RemoveContainer" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.748083 4820 generic.go:334] "Generic (PLEG): container finished" podID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerID="4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1" exitCode=0
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.748103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerDied","Data":"4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1"}
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.828186 4820 scope.go:117] "RemoveContainer" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.850622 4820 scope.go:117] "RemoveContainer" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"
Feb 21 07:04:25 crc kubenswrapper[4820]: E0221 07:04:25.852481 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac\": container with ID starting with 8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac not found: ID does not exist" containerID="8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.852507 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac"} err="failed to get container status \"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac\": rpc error: code = NotFound desc = could not find container \"8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac\": container with ID starting with 8a962b7e88ed103a7d63c3cacd865ab54cd6d1821840edf6cf3e043aae62caac not found: ID does not exist"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.852527 4820 scope.go:117] "RemoveContainer" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"
Feb 21 07:04:25 crc kubenswrapper[4820]: E0221 07:04:25.853083 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6\": container with ID starting with 552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6 not found: ID does not exist" containerID="552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.853100 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6"} err="failed to get container status \"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6\": rpc error: code = NotFound desc = could not find container \"552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6\": container with ID starting with 552ce85e54fe800a5fc401a82ef02a624fa6a031ce0bc07368eaed2f0ec3fcd6 not found: ID does not exist"
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.861426 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"]
Feb 21 07:04:25 crc kubenswrapper[4820]: I0221 07:04:25.868603 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d6b9fdb89-b9hkw"]
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.093860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.094073 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.131435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.755986 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerID="5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf" exitCode=0
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.756046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerDied","Data":"5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf"}
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.759963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerStarted","Data":"437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25"}
Feb 21 07:04:26 crc kubenswrapper[4820]: I0221 07:04:26.806471 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.352523127 podStartE2EDuration="25.806432033s" podCreationTimestamp="2026-02-21 07:04:01 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.412712588 +0000 UTC m=+1029.445796786" lastFinishedPulling="2026-02-21 07:04:20.866621504 +0000 UTC m=+1035.899705692" observedRunningTime="2026-02-21 07:04:26.798001323 +0000 UTC m=+1041.831085541" watchObservedRunningTime="2026-02-21 07:04:26.806432033 +0000 UTC m=+1041.839516231"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.288797 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.321214 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.519537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.710729 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" path="/var/lib/kubelet/pods/924235f7-e875-49cd-b7c1-1cfa96515a97/volumes"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.770998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerStarted","Data":"8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7"}
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.771299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.798461 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.560146597 podStartE2EDuration="28.798442991s" podCreationTimestamp="2026-02-21 07:03:59 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.123114522 +0000 UTC m=+1029.156198730" lastFinishedPulling="2026-02-21 07:04:21.361410926 +0000 UTC m=+1036.394495124" observedRunningTime="2026-02-21 07:04:27.788409968 +0000 UTC m=+1042.821494166" watchObservedRunningTime="2026-02-21 07:04:27.798442991 +0000 UTC m=+1042.831527189"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.805972 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.812593 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968263 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"]
Feb 21 07:04:27 crc kubenswrapper[4820]: E0221 07:04:27.968635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968648 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns"
Feb 21 07:04:27 crc kubenswrapper[4820]: E0221 07:04:27.968663 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="init"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968669 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="init"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.968835 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="924235f7-e875-49cd-b7c1-1cfa96515a97" containerName="dnsmasq-dns"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.969859 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.973334 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 21 07:04:27 crc kubenswrapper[4820]: I0221 07:04:27.988746 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.008342 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p2v97"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.009345 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.012193 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.023792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.084792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.097836 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"]
Feb 21 07:04:28 crc kubenswrapper[4820]: E0221 07:04:28.098411 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-phbmt ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" podUID="acd01fc7-7058-41a4-b8f6-7d5cb3626330"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.123262 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.124465 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.127132 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.136161 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186632 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.186992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187160 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.187973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.188899 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.190882 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.204938 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.219584 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.228043 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.228282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.228989 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.229168 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-45s9c"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.240990 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.272904 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"dnsmasq-dns-57bdd75c-rrbhp\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.289975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.290335 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.291537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.291699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.292449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.295106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.296267 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.305444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"ovn-controller-metrics-p2v97\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.332830 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391599 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391713 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.391982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392124 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.392712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.393046 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.393327 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.393900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.411424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"dnsmasq-dns-75b7bcc64f-frqzv\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.441326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494396 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.494629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.498183 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.504704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.504925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.521058 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.521095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.524694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.530952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"ovn-northd-0\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.551459 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.783774 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.797649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp"
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.861958 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"]
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") "
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") "
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902850 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\" (UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") "
Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.902920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") pod \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\"
(UID: \"acd01fc7-7058-41a4-b8f6-7d5cb3626330\") " Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.903188 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.903491 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.904336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.905173 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config" (OuterVolumeSpecName: "config") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.912942 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt" (OuterVolumeSpecName: "kube-api-access-phbmt") pod "acd01fc7-7058-41a4-b8f6-7d5cb3626330" (UID: "acd01fc7-7058-41a4-b8f6-7d5cb3626330"). 
InnerVolumeSpecName "kube-api-access-phbmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:28 crc kubenswrapper[4820]: I0221 07:04:28.947494 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:28 crc kubenswrapper[4820]: W0221 07:04:28.964545 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c30e7c_2c39_4e47_a120_d3da3367497e.slice/crio-5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f WatchSource:0}: Error finding container 5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f: Status 404 returned error can't find the container with id 5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.005204 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phbmt\" (UniqueName: \"kubernetes.io/projected/acd01fc7-7058-41a4-b8f6-7d5cb3626330-kube-api-access-phbmt\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.005606 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.005616 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd01fc7-7058-41a4-b8f6-7d5cb3626330-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.033270 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:04:29 crc kubenswrapper[4820]: W0221 07:04:29.033645 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5b71e95_fe49_48b2_8d7b_575e17855d52.slice/crio-604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2 WatchSource:0}: Error finding container 604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2: Status 404 returned error can't find the container with id 604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2 Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.790871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" event={"ID":"44c30e7c-2c39-4e47-a120-d3da3367497e","Type":"ContainerStarted","Data":"5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f"} Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.791816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerStarted","Data":"604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2"} Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.792651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerStarted","Data":"7d34608592e5bad3ce2cdbb838b7f2d91070fccc15c351f0f966dcae95c21a16"} Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.792714 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-rrbhp" Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.831100 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:29 crc kubenswrapper[4820]: I0221 07:04:29.837106 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-rrbhp"] Feb 21 07:04:30 crc kubenswrapper[4820]: I0221 07:04:30.962551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 21 07:04:30 crc kubenswrapper[4820]: I0221 07:04:30.962945 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 21 07:04:31 crc kubenswrapper[4820]: I0221 07:04:31.706119 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd01fc7-7058-41a4-b8f6-7d5cb3626330" path="/var/lib/kubelet/pods/acd01fc7-7058-41a4-b8f6-7d5cb3626330/volumes" Feb 21 07:04:32 crc kubenswrapper[4820]: I0221 07:04:32.413227 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:32 crc kubenswrapper[4820]: I0221 07:04:32.413291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.686787 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.721041 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.747476 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.748802 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.778413 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:34 crc kubenswrapper[4820]: I0221 07:04:34.913522 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014833 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.014998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.015029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016144 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.016176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.033376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod 
\"dnsmasq-dns-689df5d84f-mhcgl\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.081778 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.542265 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 07:04:35 crc kubenswrapper[4820]: W0221 07:04:35.557574 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97c27e55_f0a0_4253_b573_21c027992fe7.slice/crio-86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502 WatchSource:0}: Error finding container 86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502: Status 404 returned error can't find the container with id 86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502 Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.840778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerStarted","Data":"4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce"} Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.841709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerStarted","Data":"86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502"} Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.922118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.927870 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.930407 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pfbp5" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.930699 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.930808 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.932136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 21 07:04:35 crc kubenswrapper[4820]: I0221 07:04:35.945720 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030507 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " 
pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030642 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.030672 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.131969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.132156 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.132189 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.132303 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:36.632222333 +0000 UTC m=+1051.665306531 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132479 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.133226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.132175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.133326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.133851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 
07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.147601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.156171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.186165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.639858 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.640094 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.640124 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: E0221 07:04:36.640185 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift 
podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:37.640166662 +0000 UTC m=+1052.673250860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.859143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerStarted","Data":"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30"} Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.860678 4820 generic.go:334] "Generic (PLEG): container finished" podID="97c27e55-f0a0-4253-b573-21c027992fe7" containerID="b23b69dde5d8d2db7290e326e8c103f21a46fecab91f2fe5987461b750aca0cf" exitCode=0 Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.860757 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerDied","Data":"b23b69dde5d8d2db7290e326e8c103f21a46fecab91f2fe5987461b750aca0cf"} Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.862210 4820 generic.go:334] "Generic (PLEG): container finished" podID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerID="bd28e2bf44f948e4e1770e722011315ebd1975ff95368e5558e96ac6107ba233" exitCode=0 Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.862283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" event={"ID":"44c30e7c-2c39-4e47-a120-d3da3367497e","Type":"ContainerDied","Data":"bd28e2bf44f948e4e1770e722011315ebd1975ff95368e5558e96ac6107ba233"} Feb 21 07:04:36 crc kubenswrapper[4820]: I0221 07:04:36.914643 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-metrics-p2v97" podStartSLOduration=9.914625675 podStartE2EDuration="9.914625675s" podCreationTimestamp="2026-02-21 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:36.913081463 +0000 UTC m=+1051.946165671" watchObservedRunningTime="2026-02-21 07:04:36.914625675 +0000 UTC m=+1051.947709873" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.270634 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360528 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360610 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360666 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: 
\"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.360740 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") pod \"44c30e7c-2c39-4e47-a120-d3da3367497e\" (UID: \"44c30e7c-2c39-4e47-a120-d3da3367497e\") " Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.365354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4" (OuterVolumeSpecName: "kube-api-access-vdpn4") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "kube-api-access-vdpn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.379686 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config" (OuterVolumeSpecName: "config") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.381752 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.383356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.400766 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44c30e7c-2c39-4e47-a120-d3da3367497e" (UID: "44c30e7c-2c39-4e47-a120-d3da3367497e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463031 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdpn4\" (UniqueName: \"kubernetes.io/projected/44c30e7c-2c39-4e47-a120-d3da3367497e-kube-api-access-vdpn4\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463456 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463534 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 
07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.463607 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c30e7c-2c39-4e47-a120-d3da3367497e-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.666651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.666867 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.666888 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.666944 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:39.666926657 +0000 UTC m=+1054.700010865 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.871258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" event={"ID":"44c30e7c-2c39-4e47-a120-d3da3367497e","Type":"ContainerDied","Data":"5dfed0812d269a0a46b9b6e9bd46f39556a88550006da2d78bb689af4e13c33f"} Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.871267 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-frqzv" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.871323 4820 scope.go:117] "RemoveContainer" containerID="bd28e2bf44f948e4e1770e722011315ebd1975ff95368e5558e96ac6107ba233" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.875100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerStarted","Data":"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d"} Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.875392 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.877159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerStarted","Data":"768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505"} Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.877459 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.925826 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.934286 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.297617092 podStartE2EDuration="9.934266466s" podCreationTimestamp="2026-02-21 07:04:28 +0000 UTC" firstStartedPulling="2026-02-21 07:04:29.036578509 +0000 UTC m=+1044.069662707" lastFinishedPulling="2026-02-21 07:04:36.673227883 +0000 UTC m=+1051.706312081" observedRunningTime="2026-02-21 07:04:37.919169234 +0000 UTC m=+1052.952253442" watchObservedRunningTime="2026-02-21 07:04:37.934266466 +0000 UTC m=+1052.967350684" Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.934335 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-frqzv"] Feb 21 07:04:37 crc kubenswrapper[4820]: E0221 07:04:37.936519 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:35416->38.102.83.201:43255: write tcp 38.102.83.201:35416->38.102.83.201:43255: write: broken pipe Feb 21 07:04:37 crc kubenswrapper[4820]: I0221 07:04:37.944299 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podStartSLOduration=3.944282029 podStartE2EDuration="3.944282029s" podCreationTimestamp="2026-02-21 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:37.937320279 +0000 UTC m=+1052.970404487" watchObservedRunningTime="2026-02-21 07:04:37.944282029 +0000 UTC m=+1052.977366227" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.410900 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.486037 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.695854 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.696162 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.696196 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.696274 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:43.696253576 +0000 UTC m=+1058.729337774 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.712733 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" path="/var/lib/kubelet/pods/44c30e7c-2c39-4e47-a120-d3da3367497e/volumes" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.785520 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:04:39 crc kubenswrapper[4820]: E0221 07:04:39.786285 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerName="init" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.786314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerName="init" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.786598 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c30e7c-2c39-4e47-a120-d3da3367497e" containerName="init" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.787585 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.790530 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.790698 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.791283 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.795497 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921651 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921728 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921758 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 
07:04:39.921866 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.921912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.922002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:39 crc kubenswrapper[4820]: I0221 07:04:39.922028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023439 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023602 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.023624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.025597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.030524 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.033206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.036699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.042690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod 
\"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.044693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.047952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"swift-ring-rebalance-rf689\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.137654 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.562221 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:04:40 crc kubenswrapper[4820]: I0221 07:04:40.901390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerStarted","Data":"c26d73d13c8ed1f73935a923bee354cfe61457ba1d8a1c7f380f8b963015bff4"} Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.070452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.161300 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.164081 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.165450 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.169706 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.173082 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.244956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.245056 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.346772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.346814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"root-account-create-update-mxq6b\" (UID: 
\"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.347805 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.390134 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"root-account-create-update-mxq6b\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.485684 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:41 crc kubenswrapper[4820]: I0221 07:04:41.941989 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:41 crc kubenswrapper[4820]: W0221 07:04:41.951406 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a5ba110_ecad_46c8_8fc2_5dc5b3efaa21.slice/crio-81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77 WatchSource:0}: Error finding container 81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77: Status 404 returned error can't find the container with id 81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77 Feb 21 07:04:42 crc kubenswrapper[4820]: I0221 07:04:42.927185 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerID="6e780104fae380320d0ded6249999a3a1b8e347ec62150e353a945acffed1e2c" exitCode=0 Feb 21 07:04:42 crc kubenswrapper[4820]: I0221 07:04:42.927310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxq6b" event={"ID":"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21","Type":"ContainerDied","Data":"6e780104fae380320d0ded6249999a3a1b8e347ec62150e353a945acffed1e2c"} Feb 21 07:04:42 crc kubenswrapper[4820]: I0221 07:04:42.927617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxq6b" event={"ID":"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21","Type":"ContainerStarted","Data":"81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77"} Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.624257 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.626493 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.630852 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.700708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.701212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.701340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: E0221 07:04:43.701516 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:43 crc kubenswrapper[4820]: E0221 07:04:43.701555 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:43 crc kubenswrapper[4820]: E0221 07:04:43.701773 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift 
podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:04:51.701751617 +0000 UTC m=+1066.734835815 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.740296 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.741364 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.745373 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.772678 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.803179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.803295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.805398 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.818745 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.818828 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.818878 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.819677 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.819738 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c" gracePeriod=600 Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.848138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"keystone-db-create-6cfkd\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.883990 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.885575 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.905759 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.906863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.906914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.963536 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.964949 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.967807 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.970808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:04:43 crc kubenswrapper[4820]: I0221 07:04:43.980589 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.008469 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.008552 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.009088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " 
pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.009271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.010583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.034869 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"keystone-b298-account-create-update-wh2wv\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.059611 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.110949 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.111046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.111142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.111168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.112816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.130541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"placement-db-create-j8m4b\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.213238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.213327 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.214077 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.229291 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"placement-c8ba-account-create-update-wmp66\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.234474 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.290519 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.963083 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c" exitCode=0 Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.963128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"} Feb 21 07:04:44 crc kubenswrapper[4820]: I0221 07:04:44.963203 4820 scope.go:117] "RemoveContainer" containerID="71784da7c98d1c6a1f3631b050c692e6a08e77f49190060892784c827a17df19" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.114779 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.122222 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.203510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.203994 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" containerID="cri-o://ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" gracePeriod=10 Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.231749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") pod \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.231822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") pod \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\" (UID: \"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.232901 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" (UID: "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.233782 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.245648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv" (OuterVolumeSpecName: "kube-api-access-jrllv") pod "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" (UID: "5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21"). InnerVolumeSpecName "kube-api-access-jrllv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.339680 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrllv\" (UniqueName: \"kubernetes.io/projected/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21-kube-api-access-jrllv\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.541820 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:04:45 crc kubenswrapper[4820]: W0221 07:04:45.544184 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb290d702_774e_48b8_a243_5a9c648740a7.slice/crio-6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7 WatchSource:0}: Error finding container 6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7: Status 404 returned error can't find the container with id 6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7 Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.606542 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6cfkd"] Feb 21 07:04:45 crc kubenswrapper[4820]: W0221 
07:04:45.607744 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8377d0c3_40a1_4a4a_b6c8_67f66dfa602d.slice/crio-c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c WatchSource:0}: Error finding container c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c: Status 404 returned error can't find the container with id c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.629944 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.764017 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"] Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.807306 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.950812 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") pod \"85621024-c5dd-4598-817a-62024db91c1d\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.950938 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hjs7\" (UniqueName: \"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") pod \"85621024-c5dd-4598-817a-62024db91c1d\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.951008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") pod 
\"85621024-c5dd-4598-817a-62024db91c1d\" (UID: \"85621024-c5dd-4598-817a-62024db91c1d\") " Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.957577 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7" (OuterVolumeSpecName: "kube-api-access-2hjs7") pod "85621024-c5dd-4598-817a-62024db91c1d" (UID: "85621024-c5dd-4598-817a-62024db91c1d"). InnerVolumeSpecName "kube-api-access-2hjs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.975286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerStarted","Data":"3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.975331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerStarted","Data":"6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.982148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerStarted","Data":"ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.982189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerStarted","Data":"c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.988521 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config" (OuterVolumeSpecName: "config") pod "85621024-c5dd-4598-817a-62024db91c1d" (UID: "85621024-c5dd-4598-817a-62024db91c1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.990894 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mxq6b" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.991608 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxq6b" event={"ID":"5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21","Type":"ContainerDied","Data":"81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.991636 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ef6f82a94f1931f57338ccb0b4c2171277f43147f4b04a97263530b4aeab77" Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.998199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerStarted","Data":"fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63"} Feb 21 07:04:45 crc kubenswrapper[4820]: I0221 07:04:45.999418 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c8ba-account-create-update-wmp66" podStartSLOduration=2.999395452 podStartE2EDuration="2.999395452s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:45.993486462 +0000 UTC m=+1061.026570660" watchObservedRunningTime="2026-02-21 07:04:45.999395452 +0000 UTC m=+1061.032479650" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.004333 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerStarted","Data":"51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.004450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerStarted","Data":"2fb3997f67c3fc260d305425e7a58e7f1b3efb875f6d7e2dd0a4d15317a90b89"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.009099 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerStarted","Data":"77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.009150 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerStarted","Data":"ce8b546c66c977997ef40cbbd237c00f88b1d5c8de3f9b7919f873c4bd98119c"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.013381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85621024-c5dd-4598-817a-62024db91c1d" (UID: "85621024-c5dd-4598-817a-62024db91c1d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015741 4820 generic.go:334] "Generic (PLEG): container finished" podID="85621024-c5dd-4598-817a-62024db91c1d" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" exitCode=0 Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerDied","Data":"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" event={"ID":"85621024-c5dd-4598-817a-62024db91c1d","Type":"ContainerDied","Data":"89ede790e040e0e9c21f3a91218ea509876d44fee835aa305d75785ff546742f"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015828 4820 scope.go:117] "RemoveContainer" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.015942 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-47ln4" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.016853 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6cfkd" podStartSLOduration=3.016834158 podStartE2EDuration="3.016834158s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:46.006906058 +0000 UTC m=+1061.039990256" watchObservedRunningTime="2026-02-21 07:04:46.016834158 +0000 UTC m=+1061.049918356" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.032346 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-j8m4b" podStartSLOduration=3.032321031 podStartE2EDuration="3.032321031s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:46.024668752 +0000 UTC m=+1061.057752950" watchObservedRunningTime="2026-02-21 07:04:46.032321031 +0000 UTC m=+1061.065405229" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.036737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443"} Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.052678 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.052709 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hjs7\" (UniqueName: 
\"kubernetes.io/projected/85621024-c5dd-4598-817a-62024db91c1d-kube-api-access-2hjs7\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.052720 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85621024-c5dd-4598-817a-62024db91c1d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.054077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rf689" podStartSLOduration=2.565535523 podStartE2EDuration="7.054041552s" podCreationTimestamp="2026-02-21 07:04:39 +0000 UTC" firstStartedPulling="2026-02-21 07:04:40.56998686 +0000 UTC m=+1055.603071058" lastFinishedPulling="2026-02-21 07:04:45.058492889 +0000 UTC m=+1060.091577087" observedRunningTime="2026-02-21 07:04:46.044561695 +0000 UTC m=+1061.077645893" watchObservedRunningTime="2026-02-21 07:04:46.054041552 +0000 UTC m=+1061.087125750" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.062022 4820 scope.go:117] "RemoveContainer" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.085077 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b298-account-create-update-wh2wv" podStartSLOduration=3.085057548 podStartE2EDuration="3.085057548s" podCreationTimestamp="2026-02-21 07:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:04:46.083404123 +0000 UTC m=+1061.116488321" watchObservedRunningTime="2026-02-21 07:04:46.085057548 +0000 UTC m=+1061.118141746" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.094304 4820 scope.go:117] "RemoveContainer" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" Feb 21 07:04:46 crc kubenswrapper[4820]: E0221 07:04:46.097486 
4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad\": container with ID starting with ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad not found: ID does not exist" containerID="ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.097526 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad"} err="failed to get container status \"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad\": rpc error: code = NotFound desc = could not find container \"ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad\": container with ID starting with ef481ce0bfb90437d2040bc2bb82143c4d9c3d9dfde648efc3449e41543a48ad not found: ID does not exist" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.097552 4820 scope.go:117] "RemoveContainer" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.103294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:04:46 crc kubenswrapper[4820]: E0221 07:04:46.104964 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e\": container with ID starting with 9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e not found: ID does not exist" containerID="9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.105016 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e"} err="failed to get container status \"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e\": rpc error: code = NotFound desc = could not find container \"9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e\": container with ID starting with 9301b46b05f86f61971c03a79af80ff76c3ab3b1f3b2551f061bf31b996de23e not found: ID does not exist" Feb 21 07:04:46 crc kubenswrapper[4820]: I0221 07:04:46.111334 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-47ln4"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.051380 4820 generic.go:334] "Generic (PLEG): container finished" podID="b290d702-774e-48b8-a243-5a9c648740a7" containerID="3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.051439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerDied","Data":"3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.054564 4820 generic.go:334] "Generic (PLEG): container finished" podID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerID="ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.054681 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerDied","Data":"ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.056508 4820 generic.go:334] "Generic (PLEG): container finished" podID="cf044875-b3ef-48f5-b802-1bd167de5685" 
containerID="51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.056598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerDied","Data":"51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.058748 4820 generic.go:334] "Generic (PLEG): container finished" podID="d781b010-be2e-465d-9789-d6188ac5a30e" containerID="77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16" exitCode=0 Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.058913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerDied","Data":"77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16"} Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670423 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:04:47 crc kubenswrapper[4820]: E0221 07:04:47.670766 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="init" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670790 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="init" Feb 21 07:04:47 crc kubenswrapper[4820]: E0221 07:04:47.670827 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670834 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" Feb 21 07:04:47 crc kubenswrapper[4820]: E0221 07:04:47.670844 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerName="mariadb-account-create-update" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.670851 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerName="mariadb-account-create-update" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.671022 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="85621024-c5dd-4598-817a-62024db91c1d" containerName="dnsmasq-dns" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.671031 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" containerName="mariadb-account-create-update" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.671738 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.682267 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.705122 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85621024-c5dd-4598-817a-62024db91c1d" path="/var/lib/kubelet/pods/85621024-c5dd-4598-817a-62024db91c1d/volumes" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.765336 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.766449 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.769008 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.779935 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.782320 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.782376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.883911 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.883988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"glance-db-create-2x7vh\" (UID: 
\"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.884032 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.884059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.886190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.905196 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"glance-db-create-2x7vh\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.985449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"glance-cd19-account-create-update-ccc55\" (UID: 
\"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.985516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:47 crc kubenswrapper[4820]: I0221 07:04:47.986260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.002219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"glance-cd19-account-create-update-ccc55\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.020776 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.087071 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.477051 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.484158 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.598414 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") pod \"cf044875-b3ef-48f5-b802-1bd167de5685\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.598877 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") pod \"cf044875-b3ef-48f5-b802-1bd167de5685\" (UID: \"cf044875-b3ef-48f5-b802-1bd167de5685\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") pod \"b290d702-774e-48b8-a243-5a9c648740a7\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599083 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") pod \"b290d702-774e-48b8-a243-5a9c648740a7\" (UID: \"b290d702-774e-48b8-a243-5a9c648740a7\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599934 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b290d702-774e-48b8-a243-5a9c648740a7" (UID: "b290d702-774e-48b8-a243-5a9c648740a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.599909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf044875-b3ef-48f5-b802-1bd167de5685" (UID: "cf044875-b3ef-48f5-b802-1bd167de5685"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.604103 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx" (OuterVolumeSpecName: "kube-api-access-jjxcx") pod "cf044875-b3ef-48f5-b802-1bd167de5685" (UID: "cf044875-b3ef-48f5-b802-1bd167de5685"). InnerVolumeSpecName "kube-api-access-jjxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.604160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25" (OuterVolumeSpecName: "kube-api-access-2kv25") pod "b290d702-774e-48b8-a243-5a9c648740a7" (UID: "b290d702-774e-48b8-a243-5a9c648740a7"). InnerVolumeSpecName "kube-api-access-2kv25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.613648 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.621831 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.624040 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2gps\" (UniqueName: \"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") pod \"d781b010-be2e-465d-9789-d6188ac5a30e\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") pod \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700917 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") pod \"d781b010-be2e-465d-9789-d6188ac5a30e\" (UID: \"d781b010-be2e-465d-9789-d6188ac5a30e\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.700955 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") pod \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\" (UID: \"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d\") " Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701445 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjxcx\" (UniqueName: \"kubernetes.io/projected/cf044875-b3ef-48f5-b802-1bd167de5685-kube-api-access-jjxcx\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 
crc kubenswrapper[4820]: I0221 07:04:48.701455 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" (UID: "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701462 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf044875-b3ef-48f5-b802-1bd167de5685-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701490 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kv25\" (UniqueName: \"kubernetes.io/projected/b290d702-774e-48b8-a243-5a9c648740a7-kube-api-access-2kv25\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701477 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d781b010-be2e-465d-9789-d6188ac5a30e" (UID: "d781b010-be2e-465d-9789-d6188ac5a30e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.701501 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b290d702-774e-48b8-a243-5a9c648740a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.705684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr" (OuterVolumeSpecName: "kube-api-access-5t9wr") pod "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" (UID: "8377d0c3-40a1-4a4a-b6c8-67f66dfa602d"). InnerVolumeSpecName "kube-api-access-5t9wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.705832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps" (OuterVolumeSpecName: "kube-api-access-d2gps") pod "d781b010-be2e-465d-9789-d6188ac5a30e" (UID: "d781b010-be2e-465d-9789-d6188ac5a30e"). InnerVolumeSpecName "kube-api-access-d2gps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.726167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.761742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:04:48 crc kubenswrapper[4820]: W0221 07:04:48.768489 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1974d89_b3a1_4cc5_b113_fb39248e5bf0.slice/crio-cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850 WatchSource:0}: Error finding container cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850: Status 404 returned error can't find the container with id cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850 Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.803127 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t9wr\" (UniqueName: \"kubernetes.io/projected/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-kube-api-access-5t9wr\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.804551 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d781b010-be2e-465d-9789-d6188ac5a30e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.804634 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:48 crc kubenswrapper[4820]: I0221 07:04:48.804717 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2gps\" (UniqueName: 
\"kubernetes.io/projected/d781b010-be2e-465d-9789-d6188ac5a30e-kube-api-access-d2gps\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.073762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerStarted","Data":"ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.074039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerStarted","Data":"8a7ec790863c179b2b5b7eb2a5ebbaeb76d3f27e1b618da17f59d0bfc7013923"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.076012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6cfkd" event={"ID":"8377d0c3-40a1-4a4a-b6c8-67f66dfa602d","Type":"ContainerDied","Data":"c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.076044 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3715fd2e582aad1ca2d36ee76fef77035e2827c6d6984f0a9fbebf8093fb91c" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.076062 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6cfkd" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.078156 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j8m4b" event={"ID":"cf044875-b3ef-48f5-b802-1bd167de5685","Type":"ContainerDied","Data":"2fb3997f67c3fc260d305425e7a58e7f1b3efb875f6d7e2dd0a4d15317a90b89"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.078181 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb3997f67c3fc260d305425e7a58e7f1b3efb875f6d7e2dd0a4d15317a90b89" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.078211 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j8m4b" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.079392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b298-account-create-update-wh2wv" event={"ID":"d781b010-be2e-465d-9789-d6188ac5a30e","Type":"ContainerDied","Data":"ce8b546c66c977997ef40cbbd237c00f88b1d5c8de3f9b7919f873c4bd98119c"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.079416 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8b546c66c977997ef40cbbd237c00f88b1d5c8de3f9b7919f873c4bd98119c" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.079476 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b298-account-create-update-wh2wv" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.087876 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerID="4dd5abb92c8dda3b5eae940d15310c89c1fabe5b33b14d2a4979ab885abf315a" exitCode=0 Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.087934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2x7vh" event={"ID":"e1974d89-b3a1-4cc5-b113-fb39248e5bf0","Type":"ContainerDied","Data":"4dd5abb92c8dda3b5eae940d15310c89c1fabe5b33b14d2a4979ab885abf315a"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.087958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2x7vh" event={"ID":"e1974d89-b3a1-4cc5-b113-fb39248e5bf0","Type":"ContainerStarted","Data":"cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.090541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-wmp66" event={"ID":"b290d702-774e-48b8-a243-5a9c648740a7","Type":"ContainerDied","Data":"6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7"} Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.090572 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e6d73f6df79b75ff6c574af7f852d64ccd80a310be4316368c220990db295a7" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.090638 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8ba-account-create-update-wmp66" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.624742 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.632025 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mxq6b"] Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.709892 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21" path="/var/lib/kubelet/pods/5a5ba110-ecad-46c8-8fc2-5dc5b3efaa21/volumes" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710611 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710859 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710876 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710888 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710894 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710909 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b290d702-774e-48b8-a243-5a9c648740a7" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710915 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b290d702-774e-48b8-a243-5a9c648740a7" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: E0221 07:04:49.710934 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.710940 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711102 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711112 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" containerName="mariadb-database-create" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711124 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b290d702-774e-48b8-a243-5a9c648740a7" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711135 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" containerName="mariadb-account-create-update" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711568 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.711708 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.714065 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.824808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.824956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.927486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.927548 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.928790 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:49 crc kubenswrapper[4820]: I0221 07:04:49.951613 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"root-account-create-update-n8n84\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.032881 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.102636 4820 generic.go:334] "Generic (PLEG): container finished" podID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" containerID="ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2" exitCode=0 Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.103062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerDied","Data":"ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2"} Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.506721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n8n84"] Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.541998 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.574551 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641276 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") pod \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") pod \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") pod \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\" (UID: \"e1974d89-b3a1-4cc5-b113-fb39248e5bf0\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.641649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") pod \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\" (UID: \"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce\") " Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.642265 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1974d89-b3a1-4cc5-b113-fb39248e5bf0" (UID: "e1974d89-b3a1-4cc5-b113-fb39248e5bf0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.642652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" (UID: "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.647262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh" (OuterVolumeSpecName: "kube-api-access-rl9zh") pod "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" (UID: "0d0b59ad-da5f-4279-8aa4-f56bd575a5ce"). InnerVolumeSpecName "kube-api-access-rl9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.647308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc" (OuterVolumeSpecName: "kube-api-access-z8hzc") pod "e1974d89-b3a1-4cc5-b113-fb39248e5bf0" (UID: "e1974d89-b3a1-4cc5-b113-fb39248e5bf0"). InnerVolumeSpecName "kube-api-access-z8hzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743889 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743933 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743949 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8hzc\" (UniqueName: \"kubernetes.io/projected/e1974d89-b3a1-4cc5-b113-fb39248e5bf0-kube-api-access-z8hzc\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:50 crc kubenswrapper[4820]: I0221 07:04:50.743963 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl9zh\" (UniqueName: \"kubernetes.io/projected/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce-kube-api-access-rl9zh\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.132994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2x7vh" event={"ID":"e1974d89-b3a1-4cc5-b113-fb39248e5bf0","Type":"ContainerDied","Data":"cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.134226 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6ed11bf6ae181e0cc0aee7eef6db8d6cab8290a128df18d7cfd8cf3b323850" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.134196 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2x7vh" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.135700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-ccc55" event={"ID":"0d0b59ad-da5f-4279-8aa4-f56bd575a5ce","Type":"ContainerDied","Data":"8a7ec790863c179b2b5b7eb2a5ebbaeb76d3f27e1b618da17f59d0bfc7013923"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.135733 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a7ec790863c179b2b5b7eb2a5ebbaeb76d3f27e1b618da17f59d0bfc7013923" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.135781 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-ccc55" Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.137474 4820 generic.go:334] "Generic (PLEG): container finished" podID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerID="4d5fc8e1fa59379f7fa36b4bb94241f9192d59f0637e2f4694cd6d2809542488" exitCode=0 Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.137521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n8n84" event={"ID":"4fc5af9d-a695-46e8-94c2-acfa134131a7","Type":"ContainerDied","Data":"4d5fc8e1fa59379f7fa36b4bb94241f9192d59f0637e2f4694cd6d2809542488"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.137550 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n8n84" event={"ID":"4fc5af9d-a695-46e8-94c2-acfa134131a7","Type":"ContainerStarted","Data":"37095b5f5021c170b115691a74b530a96a8b753f8dbd3bbb0142dea2a73ec810"} Feb 21 07:04:51 crc kubenswrapper[4820]: I0221 07:04:51.763067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: 
\"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:04:51 crc kubenswrapper[4820]: E0221 07:04:51.763575 4820 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 21 07:04:51 crc kubenswrapper[4820]: E0221 07:04:51.763606 4820 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 21 07:04:51 crc kubenswrapper[4820]: E0221 07:04:51.763665 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift podName:b2200daa-1861-49f4-965a-68417ec65542 nodeName:}" failed. No retries permitted until 2026-02-21 07:05:07.763644426 +0000 UTC m=+1082.796728624 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift") pod "swift-storage-0" (UID: "b2200daa-1861-49f4-965a-68417ec65542") : configmap "swift-ring-files" not found Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.147648 4820 generic.go:334] "Generic (PLEG): container finished" podID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerID="fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63" exitCode=0 Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.147856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerDied","Data":"fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63"} Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.507909 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.586969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") pod \"4fc5af9d-a695-46e8-94c2-acfa134131a7\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.587029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") pod \"4fc5af9d-a695-46e8-94c2-acfa134131a7\" (UID: \"4fc5af9d-a695-46e8-94c2-acfa134131a7\") " Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.587881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fc5af9d-a695-46e8-94c2-acfa134131a7" (UID: "4fc5af9d-a695-46e8-94c2-acfa134131a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.592016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg" (OuterVolumeSpecName: "kube-api-access-49bqg") pod "4fc5af9d-a695-46e8-94c2-acfa134131a7" (UID: "4fc5af9d-a695-46e8-94c2-acfa134131a7"). InnerVolumeSpecName "kube-api-access-49bqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.688754 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bqg\" (UniqueName: \"kubernetes.io/projected/4fc5af9d-a695-46e8-94c2-acfa134131a7-kube-api-access-49bqg\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.688801 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fc5af9d-a695-46e8-94c2-acfa134131a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.922705 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:04:52 crc kubenswrapper[4820]: E0221 07:04:52.923271 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerName="mariadb-database-create" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923300 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerName="mariadb-database-create" Feb 21 07:04:52 crc kubenswrapper[4820]: E0221 07:04:52.923337 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923350 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: E0221 07:04:52.923378 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" 
containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923587 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" containerName="mariadb-database-create" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923612 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.923623 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" containerName="mariadb-account-create-update" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.924353 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.926652 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bl7bk" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.926931 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.931751 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.991391 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" probeResult="failure" output=< Feb 21 07:04:52 crc kubenswrapper[4820]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 21 07:04:52 crc kubenswrapper[4820]: > Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:52 crc kubenswrapper[4820]: I0221 07:04:52.993719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.015422 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.058664 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod 
\"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.095995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.099753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.100854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 
07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.101023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.117555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"glance-db-sync-5knjn\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") " pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.159112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n8n84" event={"ID":"4fc5af9d-a695-46e8-94c2-acfa134131a7","Type":"ContainerDied","Data":"37095b5f5021c170b115691a74b530a96a8b753f8dbd3bbb0142dea2a73ec810"} Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.159165 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37095b5f5021c170b115691a74b530a96a8b753f8dbd3bbb0142dea2a73ec810" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.159170 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n8n84" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.253140 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.286121 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.287539 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.298465 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.305556 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"] Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.402836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403152 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: 
\"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.403443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod 
\"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505332 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: 
\"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.505969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.506347 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.507786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.522027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod \"ovn-controller-sfpp9-config-5txw6\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") " pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.574557 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rf689" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.624491 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708399 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708488 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708634 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.708707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") pod \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\" (UID: \"3f798ecc-7cdf-4b7b-b8c9-0754d3391676\") " Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.709690 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.716459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.717252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8" (OuterVolumeSpecName: "kube-api-access-4crq8") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "kube-api-access-4crq8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.738023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.738259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.742352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.742684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts" (OuterVolumeSpecName: "scripts") pod "3f798ecc-7cdf-4b7b-b8c9-0754d3391676" (UID: "3f798ecc-7cdf-4b7b-b8c9-0754d3391676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826105 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826142 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4crq8\" (UniqueName: \"kubernetes.io/projected/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-kube-api-access-4crq8\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826153 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826170 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826181 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826295 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.826308 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f798ecc-7cdf-4b7b-b8c9-0754d3391676-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:53 crc kubenswrapper[4820]: I0221 07:04:53.868160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5knjn"]
Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.088510 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"]
Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.168478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-5txw6" event={"ID":"6d564dd0-292f-4a24-9f18-a1e1bac56e9d","Type":"ContainerStarted","Data":"9ed44088fdf574758e662e5ceff9a2c2ab741fead1b9e910642245686b79825b"}
Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.169360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerStarted","Data":"06c7c445d64ced196c5da3af11e304c1072522569a7cfbf0d406157ab3cc8687"}
Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.172593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rf689" event={"ID":"3f798ecc-7cdf-4b7b-b8c9-0754d3391676","Type":"ContainerDied","Data":"c26d73d13c8ed1f73935a923bee354cfe61457ba1d8a1c7f380f8b963015bff4"}
Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.172617 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26d73d13c8ed1f73935a923bee354cfe61457ba1d8a1c7f380f8b963015bff4"
Feb 21 07:04:54 crc kubenswrapper[4820]: I0221 07:04:54.172706 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rf689"
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.181984 4820 generic.go:334] "Generic (PLEG): container finished" podID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerID="25ee57b0b664af1977c29401acb29880d1b373991571fe5848274a63a6cd3a3e" exitCode=0
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.182040 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-5txw6" event={"ID":"6d564dd0-292f-4a24-9f18-a1e1bac56e9d","Type":"ContainerDied","Data":"25ee57b0b664af1977c29401acb29880d1b373991571fe5848274a63a6cd3a3e"}
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.184809 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa49984a-9511-4449-adc6-997899961f73" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" exitCode=0
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.184869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerDied","Data":"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc"}
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.186946 4820 generic.go:334] "Generic (PLEG): container finished" podID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" exitCode=0
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.186980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerDied","Data":"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012"}
Feb 21 07:04:55 crc kubenswrapper[4820]: I0221 07:04:55.814457 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod924235f7-e875-49cd-b7c1-1cfa96515a97"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod924235f7-e875-49cd-b7c1-1cfa96515a97] : Timed out while waiting for systemd to remove kubepods-besteffort-pod924235f7_e875_49cd_b7c1_1cfa96515a97.slice"
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.186656 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-n8n84"]
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.194411 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-n8n84"]
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.196368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerStarted","Data":"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078"}
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.196661 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.200793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerStarted","Data":"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f"}
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.201350 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.222286 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.608229889 podStartE2EDuration="59.222268722s" podCreationTimestamp="2026-02-21 07:03:57 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.136865967 +0000 UTC m=+1029.169950155" lastFinishedPulling="2026-02-21 07:04:20.75090479 +0000 UTC m=+1035.783988988" observedRunningTime="2026-02-21 07:04:56.216314068 +0000 UTC m=+1071.249398296" watchObservedRunningTime="2026-02-21 07:04:56.222268722 +0000 UTC m=+1071.255352920"
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.242632 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.137134744 podStartE2EDuration="58.242611596s" podCreationTimestamp="2026-02-21 07:03:58 +0000 UTC" firstStartedPulling="2026-02-21 07:04:14.255948344 +0000 UTC m=+1029.289032542" lastFinishedPulling="2026-02-21 07:04:21.361425196 +0000 UTC m=+1036.394509394" observedRunningTime="2026-02-21 07:04:56.241449124 +0000 UTC m=+1071.274533342" watchObservedRunningTime="2026-02-21 07:04:56.242611596 +0000 UTC m=+1071.275695794"
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.498595 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6"
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") "
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") "
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593474 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") "
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") "
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") "
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") pod \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\" (UID: \"6d564dd0-292f-4a24-9f18-a1e1bac56e9d\") "
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593908 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run" (OuterVolumeSpecName: "var-run") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.593972 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594386 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594455 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594511 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-var-run\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.594515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts" (OuterVolumeSpecName: "scripts") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.610212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj" (OuterVolumeSpecName: "kube-api-access-vkdgj") pod "6d564dd0-292f-4a24-9f18-a1e1bac56e9d" (UID: "6d564dd0-292f-4a24-9f18-a1e1bac56e9d"). InnerVolumeSpecName "kube-api-access-vkdgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.696354 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.696388 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkdgj\" (UniqueName: \"kubernetes.io/projected/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-kube-api-access-vkdgj\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:56 crc kubenswrapper[4820]: I0221 07:04:56.696400 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d564dd0-292f-4a24-9f18-a1e1bac56e9d-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.209175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-5txw6" event={"ID":"6d564dd0-292f-4a24-9f18-a1e1bac56e9d","Type":"ContainerDied","Data":"9ed44088fdf574758e662e5ceff9a2c2ab741fead1b9e910642245686b79825b"}
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.209223 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed44088fdf574758e662e5ceff9a2c2ab741fead1b9e910642245686b79825b"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.209290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-5txw6"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.586648 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"]
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.602860 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sfpp9-config-5txw6"]
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.686983 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"]
Feb 21 07:04:57 crc kubenswrapper[4820]: E0221 07:04:57.687365 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerName="swift-ring-rebalance"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687388 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerName="swift-ring-rebalance"
Feb 21 07:04:57 crc kubenswrapper[4820]: E0221 07:04:57.687416 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerName="ovn-config"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687425 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerName="ovn-config"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687637 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" containerName="swift-ring-rebalance"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.687662 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" containerName="ovn-config"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.688348 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.698040 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.707159 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc5af9d-a695-46e8-94c2-acfa134131a7" path="/var/lib/kubelet/pods/4fc5af9d-a695-46e8-94c2-acfa134131a7/volumes"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.707747 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d564dd0-292f-4a24-9f18-a1e1bac56e9d" path="/var/lib/kubelet/pods/6d564dd0-292f-4a24-9f18-a1e1bac56e9d/volumes"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.708423 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"]
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812511 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812598 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812653 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.812718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.813059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.914905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.915086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.915468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.915780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.916879 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.930908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"ovn-controller-sfpp9-config-jwq6z\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") " pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:57 crc kubenswrapper[4820]: I0221 07:04:57.976556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sfpp9"
Feb 21 07:04:58 crc kubenswrapper[4820]: I0221 07:04:58.007649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:04:58 crc kubenswrapper[4820]: I0221 07:04:58.465784 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"]
Feb 21 07:04:58 crc kubenswrapper[4820]: W0221 07:04:58.486576 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d4589d4_4df1_40d9_9af3_fedff5530ab1.slice/crio-3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718 WatchSource:0}: Error finding container 3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718: Status 404 returned error can't find the container with id 3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718
Feb 21 07:04:59 crc kubenswrapper[4820]: I0221 07:04:59.227827 4820 generic.go:334] "Generic (PLEG): container finished" podID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerID="6217a40428e0542093ddeccb7b2d5a7d3a0d949e486fb5723c5776887db5cdde" exitCode=0
Feb 21 07:04:59 crc kubenswrapper[4820]: I0221 07:04:59.227872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-jwq6z" event={"ID":"6d4589d4-4df1-40d9-9af3-fedff5530ab1","Type":"ContainerDied","Data":"6217a40428e0542093ddeccb7b2d5a7d3a0d949e486fb5723c5776887db5cdde"}
Feb 21 07:04:59 crc kubenswrapper[4820]: I0221 07:04:59.228132 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-jwq6z" event={"ID":"6d4589d4-4df1-40d9-9af3-fedff5530ab1","Type":"ContainerStarted","Data":"3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718"}
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.207812 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bzcnx"]
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.208964 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.210899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.218906 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bzcnx"]
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.271202 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.271437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.372603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.372763 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.373475 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.418359 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"root-account-create-update-bzcnx\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:01 crc kubenswrapper[4820]: I0221 07:05:01.529173 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bzcnx"
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.459919 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z"
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") "
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554693 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") "
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554723 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") "
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554761 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") "
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run" (OuterVolumeSpecName: "var-run") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") "
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.554901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") pod \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\" (UID: \"6d4589d4-4df1-40d9-9af3-fedff5530ab1\") "
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555130 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555141 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-run\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555173 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.555856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.556398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts" (OuterVolumeSpecName: "scripts") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.559073 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh" (OuterVolumeSpecName: "kube-api-access-vc8kh") pod "6d4589d4-4df1-40d9-9af3-fedff5530ab1" (UID: "6d4589d4-4df1-40d9-9af3-fedff5530ab1"). InnerVolumeSpecName "kube-api-access-vc8kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656856 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656895 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d4589d4-4df1-40d9-9af3-fedff5530ab1-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656904 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc8kh\" (UniqueName: \"kubernetes.io/projected/6d4589d4-4df1-40d9-9af3-fedff5530ab1-kube-api-access-vc8kh\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.656914 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d4589d4-4df1-40d9-9af3-fedff5530ab1-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.790699 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bzcnx"]
Feb 21 07:05:05 crc kubenswrapper[4820]: W0221 07:05:05.801181 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df65e7d_3ade_4585_9f5f_7a4b7c8bc8eb.slice/crio-a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593 WatchSource:0}: Error finding container a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593: Status 404 returned error can't find the container with id a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593
Feb 21 07:05:05 crc kubenswrapper[4820]: I0221 07:05:05.805717 4820 reflector.go:368] Caches populated for *v1.Secret from
object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.287851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9-config-jwq6z" event={"ID":"6d4589d4-4df1-40d9-9af3-fedff5530ab1","Type":"ContainerDied","Data":"3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718"} Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.288121 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2fd5db2417e14af88923f56274a3e97684fc0a3371a7473a1f9625fa0c1718" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.287886 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9-config-jwq6z" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.289894 4820 generic.go:334] "Generic (PLEG): container finished" podID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerID="bccd056d3ccb7b521fe7131d2adc1ebf924abaee6a5315ab7005a0ebaf022fd8" exitCode=0 Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.289931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bzcnx" event={"ID":"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb","Type":"ContainerDied","Data":"bccd056d3ccb7b521fe7131d2adc1ebf924abaee6a5315ab7005a0ebaf022fd8"} Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.289974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bzcnx" event={"ID":"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb","Type":"ContainerStarted","Data":"a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593"} Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.292149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerStarted","Data":"3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8"} Feb 21 
07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.322880 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5knjn" podStartSLOduration=2.687514647 podStartE2EDuration="14.322859035s" podCreationTimestamp="2026-02-21 07:04:52 +0000 UTC" firstStartedPulling="2026-02-21 07:04:53.84704097 +0000 UTC m=+1068.880125168" lastFinishedPulling="2026-02-21 07:05:05.482385358 +0000 UTC m=+1080.515469556" observedRunningTime="2026-02-21 07:05:06.318836495 +0000 UTC m=+1081.351920713" watchObservedRunningTime="2026-02-21 07:05:06.322859035 +0000 UTC m=+1081.355943243" Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.536417 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:05:06 crc kubenswrapper[4820]: I0221 07:05:06.545053 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sfpp9-config-jwq6z"] Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.629750 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.690758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") pod \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.690888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") pod \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\" (UID: \"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb\") " Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.691674 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" (UID: "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.696568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk" (OuterVolumeSpecName: "kube-api-access-m6rqk") pod "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" (UID: "6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb"). InnerVolumeSpecName "kube-api-access-m6rqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.708612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" path="/var/lib/kubelet/pods/6d4589d4-4df1-40d9-9af3-fedff5530ab1/volumes" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.792524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.792618 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6rqk\" (UniqueName: \"kubernetes.io/projected/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-kube-api-access-m6rqk\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.792631 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.798558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"swift-storage-0\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " pod="openstack/swift-storage-0" Feb 21 07:05:07 crc kubenswrapper[4820]: I0221 07:05:07.811657 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.311667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bzcnx" event={"ID":"6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb","Type":"ContainerDied","Data":"a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593"} Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.311943 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20eaeb10dd06f76f2c1f0f46a91b2a3402460b150aa71a9d120a8dfed9dc593" Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.311801 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bzcnx" Feb 21 07:05:08 crc kubenswrapper[4820]: I0221 07:05:08.323325 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:05:08 crc kubenswrapper[4820]: W0221 07:05:08.325885 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2200daa_1861_49f4_965a_68417ec65542.slice/crio-0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f WatchSource:0}: Error finding container 0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f: Status 404 returned error can't find the container with id 0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f Feb 21 07:05:09 crc kubenswrapper[4820]: I0221 07:05:09.325189 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f"} Feb 21 07:05:09 crc kubenswrapper[4820]: I0221 07:05:09.347420 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:05:09 crc kubenswrapper[4820]: 
I0221 07:05:09.646809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.335922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"} Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.336298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"} Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.336317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"} Feb 21 07:05:10 crc kubenswrapper[4820]: I0221 07:05:10.336354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"} Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165193 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:05:11 crc kubenswrapper[4820]: E0221 07:05:11.165601 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerName="ovn-config" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165626 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerName="ovn-config" Feb 21 07:05:11 crc kubenswrapper[4820]: E0221 07:05:11.165665 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerName="mariadb-account-create-update" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165673 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerName="mariadb-account-create-update" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165832 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4589d4-4df1-40d9-9af3-fedff5530ab1" containerName="ovn-config" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.165849 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" containerName="mariadb-account-create-update" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.166471 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.189155 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.281248 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.282222 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.285288 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.291985 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.349359 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.349636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.366651 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.367892 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.383436 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.384579 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.391654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.400167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.408127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.446452 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.447483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.449551 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.450264 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.450455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.450723 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w79dl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.451426 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc 
kubenswrapper[4820]: I0221 07:05:11.451696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.451764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.451816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.452530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.465232 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.470516 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.471567 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.484960 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.502383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"cinder-db-create-jng5b\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") " pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553514 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553536 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553692 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " 
pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.553777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.554407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.569266 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"cinder-6976-account-create-update-mzpt2\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") " pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.600226 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655310 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"neutron-db-create-lnssq\" (UID: 
\"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.655976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656070 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"barbican-db-create-w9fxb\" (UID: 
\"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.656692 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.658934 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.659693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.676494 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"barbican-4e9a-account-create-update-55xqx\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") " pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.689813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"keystone-db-sync-68q2w\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " 
pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.705355 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"barbican-db-create-w9fxb\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") " pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.705366 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.706267 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.707284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.715534 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.723553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.758111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.758181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod 
\"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.759135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.771144 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.784400 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.791864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod \"neutron-db-create-lnssq\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.828602 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.869355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.869438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.971747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.971811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.972584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:11 crc kubenswrapper[4820]: I0221 07:05:11.996377 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w9fxb" Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.001014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"neutron-c516-account-create-update-mxhpl\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.105053 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.238668 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.350490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerStarted","Data":"83aa7c0bc22c00737da59e958e3d2d3b4c976a72a172bae6bb7e159573f091de"} Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.395979 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-68q2w"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.398185 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26b24bc_e904_49a1_b2bc_d140b0032b83.slice/crio-7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545 WatchSource:0}: Error finding container 7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545: Status 404 returned error can't find the container with id 7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545 Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.504304 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.520218 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8a463c_63a8_424f_a3ab_4e46390b8cca.slice/crio-d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a WatchSource:0}: Error finding container d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a: Status 404 returned error can't find the container with id d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.527131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.527652 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9dd869_f673_4077_b345_05b4e79eb590.slice/crio-ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032 WatchSource:0}: Error finding container ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032: Status 404 returned error can't find the container with id ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032 Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.541200 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 
07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.547949 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.553249 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8063e5a_6b15_4855_9ae2_5fdcc912b472.slice/crio-ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f WatchSource:0}: Error finding container ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f: Status 404 returned error can't find the container with id ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f Feb 21 07:05:12 crc kubenswrapper[4820]: I0221 07:05:12.673939 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:05:12 crc kubenswrapper[4820]: W0221 07:05:12.684699 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d96043_ca9d_4dd0_aa3e_8bcd5941a97b.slice/crio-7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed WatchSource:0}: Error finding container 7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed: Status 404 returned error can't find the container with id 7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.359360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerStarted","Data":"7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.360924 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" 
event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerStarted","Data":"1fbc501023d995cbd34384639ed9732eac99f54df56741ae8eaa92c36c83d1f7"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.361889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerStarted","Data":"ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.362836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerStarted","Data":"ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.363892 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerStarted","Data":"7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed"} Feb 21 07:05:13 crc kubenswrapper[4820]: I0221 07:05:13.364811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerStarted","Data":"d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.377429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerStarted","Data":"bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.380751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" 
event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerStarted","Data":"b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.383621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerStarted","Data":"9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.385818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerStarted","Data":"03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.387693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerStarted","Data":"b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.392785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerStarted","Data":"2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0"} Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.398415 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c516-account-create-update-mxhpl" podStartSLOduration=3.398398265 podStartE2EDuration="3.398398265s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.392869524 +0000 UTC m=+1089.425953732" watchObservedRunningTime="2026-02-21 07:05:14.398398265 +0000 UTC 
m=+1089.431482463" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.415813 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lnssq" podStartSLOduration=3.41579041 podStartE2EDuration="3.41579041s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.412108739 +0000 UTC m=+1089.445192937" watchObservedRunningTime="2026-02-21 07:05:14.41579041 +0000 UTC m=+1089.448874608" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.427864 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-jng5b" podStartSLOduration=3.427847148 podStartE2EDuration="3.427847148s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.426335707 +0000 UTC m=+1089.459419905" watchObservedRunningTime="2026-02-21 07:05:14.427847148 +0000 UTC m=+1089.460931346" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.444793 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4e9a-account-create-update-55xqx" podStartSLOduration=3.444767539 podStartE2EDuration="3.444767539s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.437129132 +0000 UTC m=+1089.470213330" watchObservedRunningTime="2026-02-21 07:05:14.444767539 +0000 UTC m=+1089.477851737" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.459474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6976-account-create-update-mzpt2" podStartSLOduration=3.45945045 podStartE2EDuration="3.45945045s" 
podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.452670865 +0000 UTC m=+1089.485755063" watchObservedRunningTime="2026-02-21 07:05:14.45945045 +0000 UTC m=+1089.492534648" Feb 21 07:05:14 crc kubenswrapper[4820]: I0221 07:05:14.475830 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-w9fxb" podStartSLOduration=3.475807715 podStartE2EDuration="3.475807715s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:14.470698697 +0000 UTC m=+1089.503782895" watchObservedRunningTime="2026-02-21 07:05:14.475807715 +0000 UTC m=+1089.508891913" Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.402547 4820 generic.go:334] "Generic (PLEG): container finished" podID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerID="9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.402766 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerDied","Data":"9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.413196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.413253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.413264 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.419196 4820 generic.go:334] "Generic (PLEG): container finished" podID="d69a9369-affe-4441-bf33-3c0f13540875" containerID="03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.419286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerDied","Data":"03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.431446 4820 generic.go:334] "Generic (PLEG): container finished" podID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerID="b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.431653 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerDied","Data":"b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.437103 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b9dd869-f673-4077-b345-05b4e79eb590" containerID="2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.437185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerDied","Data":"2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.439594 4820 generic.go:334] "Generic (PLEG): container finished" podID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerID="bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.439653 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerDied","Data":"bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1"} Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.440620 4820 generic.go:334] "Generic (PLEG): container finished" podID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerID="b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60" exitCode=0 Feb 21 07:05:15 crc kubenswrapper[4820]: I0221 07:05:15.440650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerDied","Data":"b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60"} Feb 21 07:05:16 crc kubenswrapper[4820]: I0221 07:05:16.454869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"} Feb 21 07:05:16 crc kubenswrapper[4820]: I0221 07:05:16.456927 4820 generic.go:334] "Generic (PLEG): container finished" podID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerID="3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8" exitCode=0 Feb 21 07:05:16 crc kubenswrapper[4820]: I0221 07:05:16.457015 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerDied","Data":"3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8"} Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.830319 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl" Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.837220 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnssq" Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") pod \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996406 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") pod \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\" (UID: \"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") pod \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\" (UID: \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.996515 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") pod \"b8063e5a-6b15-4855-9ae2-5fdcc912b472\" (UID: 
\"b8063e5a-6b15-4855-9ae2-5fdcc912b472\") " Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.997142 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" (UID: "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:18 crc kubenswrapper[4820]: I0221 07:05:18.997247 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8063e5a-6b15-4855-9ae2-5fdcc912b472" (UID: "b8063e5a-6b15-4855-9ae2-5fdcc912b472"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.002285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh" (OuterVolumeSpecName: "kube-api-access-cqphh") pod "b8063e5a-6b15-4855-9ae2-5fdcc912b472" (UID: "b8063e5a-6b15-4855-9ae2-5fdcc912b472"). InnerVolumeSpecName "kube-api-access-cqphh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.008456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8" (OuterVolumeSpecName: "kube-api-access-hrds8") pod "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" (UID: "3b1b4a37-bb80-4c59-acdc-b6490c6e6c44"). InnerVolumeSpecName "kube-api-access-hrds8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrds8\" (UniqueName: \"kubernetes.io/projected/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-kube-api-access-hrds8\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098261 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqphh\" (UniqueName: \"kubernetes.io/projected/b8063e5a-6b15-4855-9ae2-5fdcc912b472-kube-api-access-cqphh\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098273 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8063e5a-6b15-4855-9ae2-5fdcc912b472-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.098281 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.131151 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.142549 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.158331 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx" Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.174180 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w9fxb"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.203862 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.300937 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") pod \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") pod \"d69a9369-affe-4441-bf33-3c0f13540875\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301114 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") pod \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301205 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") pod \"d69a9369-affe-4441-bf33-3c0f13540875\" (UID: \"d69a9369-affe-4441-bf33-3c0f13540875\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301229 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301268 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") pod \"4b9dd869-f673-4077-b345-05b4e79eb590\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") pod \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\" (UID: \"8e8a463c-63a8-424f-a3ab-4e46390b8cca\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301522 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") pod \"4b9dd869-f673-4077-b345-05b4e79eb590\" (UID: \"4b9dd869-f673-4077-b345-05b4e79eb590\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301713 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e8a463c-63a8-424f-a3ab-4e46390b8cca" (UID: "8e8a463c-63a8-424f-a3ab-4e46390b8cca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.301760 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") pod \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\" (UID: \"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") pod \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\" (UID: \"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc\") "
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302223 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" (UID: "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302553 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8a463c-63a8-424f-a3ab-4e46390b8cca-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302578 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.302945 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b9dd869-f673-4077-b345-05b4e79eb590" (UID: "4b9dd869-f673-4077-b345-05b4e79eb590"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.303098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d69a9369-affe-4441-bf33-3c0f13540875" (UID: "d69a9369-affe-4441-bf33-3c0f13540875"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.304366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6" (OuterVolumeSpecName: "kube-api-access-rptw6") pod "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" (UID: "10d96043-ca9d-4dd0-aa3e-8bcd5941a97b"). InnerVolumeSpecName "kube-api-access-rptw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq" (OuterVolumeSpecName: "kube-api-access-fbwdq") pod "d69a9369-affe-4441-bf33-3c0f13540875" (UID: "d69a9369-affe-4441-bf33-3c0f13540875"). InnerVolumeSpecName "kube-api-access-fbwdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq" (OuterVolumeSpecName: "kube-api-access-ccfxq") pod "8e8a463c-63a8-424f-a3ab-4e46390b8cca" (UID: "8e8a463c-63a8-424f-a3ab-4e46390b8cca"). InnerVolumeSpecName "kube-api-access-ccfxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.307725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5" (OuterVolumeSpecName: "kube-api-access-2jsg5") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "kube-api-access-2jsg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.309575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh" (OuterVolumeSpecName: "kube-api-access-sg8hh") pod "4b9dd869-f673-4077-b345-05b4e79eb590" (UID: "4b9dd869-f673-4077-b345-05b4e79eb590"). InnerVolumeSpecName "kube-api-access-sg8hh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.327905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.344801 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data" (OuterVolumeSpecName: "config-data") pod "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" (UID: "e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404007 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69a9369-affe-4441-bf33-3c0f13540875-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404053 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404069 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwdq\" (UniqueName: \"kubernetes.io/projected/d69a9369-affe-4441-bf33-3c0f13540875-kube-api-access-fbwdq\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404104 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9dd869-f673-4077-b345-05b4e79eb590-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404119 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404132 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfxq\" (UniqueName: \"kubernetes.io/projected/8e8a463c-63a8-424f-a3ab-4e46390b8cca-kube-api-access-ccfxq\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404145 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404159 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg8hh\" (UniqueName: \"kubernetes.io/projected/4b9dd869-f673-4077-b345-05b4e79eb590-kube-api-access-sg8hh\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404172 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jsg5\" (UniqueName: \"kubernetes.io/projected/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc-kube-api-access-2jsg5\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.404183 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rptw6\" (UniqueName: \"kubernetes.io/projected/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b-kube-api-access-rptw6\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.481717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jng5b" event={"ID":"4b9dd869-f673-4077-b345-05b4e79eb590","Type":"ContainerDied","Data":"ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.481753 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jng5b"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.481775 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad651295e6790de81b4d37f5e6c8931c1d983fa719ba0c2e89abe917c16fa032"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.483025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-55xqx" event={"ID":"8e8a463c-63a8-424f-a3ab-4e46390b8cca","Type":"ContainerDied","Data":"d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.483050 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22b9aa89457fa4b9e71c82c996799b20fb0531c15a153ed6e3fab6ce7936c6a"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.483115 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-55xqx"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.490047 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6976-account-create-update-mzpt2" event={"ID":"d69a9369-affe-4441-bf33-3c0f13540875","Type":"ContainerDied","Data":"83aa7c0bc22c00737da59e958e3d2d3b4c976a72a172bae6bb7e159573f091de"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.490089 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83aa7c0bc22c00737da59e958e3d2d3b4c976a72a172bae6bb7e159573f091de"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.490152 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6976-account-create-update-mzpt2"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.497510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerStarted","Data":"ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.499868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-mxhpl" event={"ID":"b8063e5a-6b15-4855-9ae2-5fdcc912b472","Type":"ContainerDied","Data":"ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.499893 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5ebc3b873484840c7f045ff493a6f409c31830e26105733a9f04a8cdc8523f"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.499937 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-mxhpl"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.505964 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w9fxb"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.505996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w9fxb" event={"ID":"10d96043-ca9d-4dd0-aa3e-8bcd5941a97b","Type":"ContainerDied","Data":"7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.506035 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc6a983dfe97f711ff91ce8ce842ed3ee41235e0ed37ea18dd4c431b78873ed"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.507905 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5knjn"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.507923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5knjn" event={"ID":"e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc","Type":"ContainerDied","Data":"06c7c445d64ced196c5da3af11e304c1072522569a7cfbf0d406157ab3cc8687"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.507948 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c7c445d64ced196c5da3af11e304c1072522569a7cfbf0d406157ab3cc8687"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.516663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.516706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.518494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnssq" event={"ID":"3b1b4a37-bb80-4c59-acdc-b6490c6e6c44","Type":"ContainerDied","Data":"1fbc501023d995cbd34384639ed9732eac99f54df56741ae8eaa92c36c83d1f7"}
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.518541 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fbc501023d995cbd34384639ed9732eac99f54df56741ae8eaa92c36c83d1f7"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.518583 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnssq"
Feb 21 07:05:19 crc kubenswrapper[4820]: I0221 07:05:19.538871 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-68q2w" podStartSLOduration=1.8546603560000001 podStartE2EDuration="8.538850811s" podCreationTimestamp="2026-02-21 07:05:11 +0000 UTC" firstStartedPulling="2026-02-21 07:05:12.400556344 +0000 UTC m=+1087.433640542" lastFinishedPulling="2026-02-21 07:05:19.084746789 +0000 UTC m=+1094.117830997" observedRunningTime="2026-02-21 07:05:19.515396751 +0000 UTC m=+1094.548480969" watchObservedRunningTime="2026-02-21 07:05:19.538850811 +0000 UTC m=+1094.571935009"
Feb 21 07:05:19 crc kubenswrapper[4820]: E0221 07:05:19.547015 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9dd869_f673_4077_b345_05b4e79eb590.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69a9369_affe_4441_bf33_3c0f13540875.slice\": RecentStats: unable to find data in memory cache]"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.532336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"}
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.532591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"}
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.532606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"}
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613560 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"]
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613891 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613908 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613916 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a9369-affe-4441-bf33-3c0f13540875" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613923 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a9369-affe-4441-bf33-3c0f13540875" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613938 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613944 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613961 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613974 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613980 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.613989 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerName="glance-db-sync"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.613996 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerName="glance-db-sync"
Feb 21 07:05:20 crc kubenswrapper[4820]: E0221 07:05:20.614004 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614011 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614144 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614153 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" containerName="glance-db-sync"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614161 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a9369-affe-4441-bf33-3c0f13540875" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614172 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614186 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614195 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" containerName="mariadb-account-create-update"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.614207 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" containerName="mariadb-database-create"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.615012 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.646316 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"]
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.731293 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832648 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.832807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.833201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.833748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.833785 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.834331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.853434 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"dnsmasq-dns-74cc88677c-sfwv9\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:20 crc kubenswrapper[4820]: I0221 07:05:20.940610 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9"
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.416915 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"]
Feb 21 07:05:21 crc kubenswrapper[4820]: W0221 07:05:21.423756 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bf146cf_2a59_4e4c_8b3b_cd34b40ac463.slice/crio-9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4 WatchSource:0}: Error finding container 9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4: Status 404 returned error can't find the container with id 9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.540260 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerStarted","Data":"9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4"}
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.545680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"}
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.545825 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerStarted","Data":"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"}
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.591038 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.835195834 podStartE2EDuration="47.591018943s" podCreationTimestamp="2026-02-21 07:04:34 +0000 UTC" firstStartedPulling="2026-02-21 07:05:08.328386846 +0000 UTC m=+1083.361471044" lastFinishedPulling="2026-02-21 07:05:19.084209955 +0000 UTC m=+1094.117294153" observedRunningTime="2026-02-21 07:05:21.579830538 +0000 UTC m=+1096.612914756" watchObservedRunningTime="2026-02-21 07:05:21.591018943 +0000 UTC m=+1096.624103141"
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.879155 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"]
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.936197 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"]
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.937497 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.939259 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 21 07:05:21 crc kubenswrapper[4820]: I0221 07:05:21.955374 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"]
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057275 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.057389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.158570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj"
Feb 21 07:05:22 crc
kubenswrapper[4820]: I0221 07:05:22.159615 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.159709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.160371 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.160378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.160467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.178171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"dnsmasq-dns-69577ff67f-ngmlj\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.280426 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.555034 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" exitCode=0 Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.555126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerDied","Data":"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1"} Feb 21 07:05:22 crc kubenswrapper[4820]: W0221 07:05:22.698983 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf122b6d8_f1d8_49ff_9056_d1c1cfd1ff5f.slice/crio-a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017 WatchSource:0}: Error finding container a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017: Status 404 returned error can't find the container with id a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017 Feb 21 07:05:22 crc kubenswrapper[4820]: I0221 07:05:22.707631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.563986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" 
event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerStarted","Data":"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755"} Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.564479 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="dnsmasq-dns" containerID="cri-o://f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" gracePeriod=10 Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.564776 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.566658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerDied","Data":"04ebef274a53b73c45ecce50762acad1cf35b7e93a1502414bf9f32f7aa71d85"} Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.566554 4820 generic.go:334] "Generic (PLEG): container finished" podID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerID="04ebef274a53b73c45ecce50762acad1cf35b7e93a1502414bf9f32f7aa71d85" exitCode=0 Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.566867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerStarted","Data":"a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017"} Feb 21 07:05:23 crc kubenswrapper[4820]: I0221 07:05:23.596369 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" podStartSLOduration=3.596347369 podStartE2EDuration="3.596347369s" podCreationTimestamp="2026-02-21 07:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-21 07:05:23.588529056 +0000 UTC m=+1098.621613274" watchObservedRunningTime="2026-02-21 07:05:23.596347369 +0000 UTC m=+1098.629431567" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.044119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.191870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.191920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.191982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.192014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.192029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") pod \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\" (UID: \"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463\") " Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.196407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh" (OuterVolumeSpecName: "kube-api-access-tb8rh") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "kube-api-access-tb8rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.230163 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.232311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.233700 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.238264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config" (OuterVolumeSpecName: "config") pod "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" (UID: "4bf146cf-2a59-4e4c-8b3b-cd34b40ac463"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294750 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294963 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294981 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.294993 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8rh\" (UniqueName: \"kubernetes.io/projected/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-kube-api-access-tb8rh\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.295010 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.575703 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b24bc-e904-49a1-b2bc-d140b0032b83" 
containerID="ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d" exitCode=0 Feb 21 07:05:24 crc kubenswrapper[4820]: I0221 07:05:24.575788 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerDied","Data":"ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.578903 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerStarted","Data":"e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.579007 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580617 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" exitCode=0 Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580640 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerDied","Data":"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" event={"ID":"4bf146cf-2a59-4e4c-8b3b-cd34b40ac463","Type":"ContainerDied","Data":"9fd3d172009945d2420a3030e3c7ac306b91895bf7a590581764b1194d1230c4"} Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580658 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-sfwv9" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.580688 4820 scope.go:117] "RemoveContainer" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.616136 4820 scope.go:117] "RemoveContainer" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.630525 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" podStartSLOduration=3.630494215 podStartE2EDuration="3.630494215s" podCreationTimestamp="2026-02-21 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:24.612569306 +0000 UTC m=+1099.645653504" watchObservedRunningTime="2026-02-21 07:05:24.630494215 +0000 UTC m=+1099.663578413" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.642982 4820 scope.go:117] "RemoveContainer" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" Feb 21 07:05:25 crc kubenswrapper[4820]: E0221 07:05:24.643542 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755\": container with ID starting with f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755 not found: ID does not exist" containerID="f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.643595 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755"} err="failed to get container status \"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755\": rpc error: code = 
NotFound desc = could not find container \"f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755\": container with ID starting with f927d6cbc9f4bcd4f959bf20e5d0b5fc297d3c90fcf3c1856c69141fc48e4755 not found: ID does not exist" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.643627 4820 scope.go:117] "RemoveContainer" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.643989 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:25 crc kubenswrapper[4820]: E0221 07:05:24.644076 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1\": container with ID starting with 2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1 not found: ID does not exist" containerID="2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.644106 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1"} err="failed to get container status \"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1\": rpc error: code = NotFound desc = could not find container \"2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1\": container with ID starting with 2c1feb17a0086f19d43de35916deafc1035448dc67193c2522918d9f40c698d1 not found: ID does not exist" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:24.654069 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-sfwv9"] Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:25.710412 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" 
path="/var/lib/kubelet/pods/4bf146cf-2a59-4e4c-8b3b-cd34b40ac463/volumes" Feb 21 07:05:25 crc kubenswrapper[4820]: I0221 07:05:25.913833 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.022963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") pod \"f26b24bc-e904-49a1-b2bc-d140b0032b83\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.023028 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") pod \"f26b24bc-e904-49a1-b2bc-d140b0032b83\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.023078 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") pod \"f26b24bc-e904-49a1-b2bc-d140b0032b83\" (UID: \"f26b24bc-e904-49a1-b2bc-d140b0032b83\") " Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.028675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8" (OuterVolumeSpecName: "kube-api-access-fllw8") pod "f26b24bc-e904-49a1-b2bc-d140b0032b83" (UID: "f26b24bc-e904-49a1-b2bc-d140b0032b83"). InnerVolumeSpecName "kube-api-access-fllw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.045307 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26b24bc-e904-49a1-b2bc-d140b0032b83" (UID: "f26b24bc-e904-49a1-b2bc-d140b0032b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.067136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data" (OuterVolumeSpecName: "config-data") pod "f26b24bc-e904-49a1-b2bc-d140b0032b83" (UID: "f26b24bc-e904-49a1-b2bc-d140b0032b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.125637 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.125681 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fllw8\" (UniqueName: \"kubernetes.io/projected/f26b24bc-e904-49a1-b2bc-d140b0032b83-kube-api-access-fllw8\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.125696 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b24bc-e904-49a1-b2bc-d140b0032b83-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.607857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-68q2w" 
event={"ID":"f26b24bc-e904-49a1-b2bc-d140b0032b83","Type":"ContainerDied","Data":"7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545"} Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.607920 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e24aafd254628e994215fa2904318870dcc9a4e70c8b809fc232e12c6faf545" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.607937 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-68q2w" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.854433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.855021 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" containerID="cri-o://e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83" gracePeriod=10 Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.895478 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:26 crc kubenswrapper[4820]: E0221 07:05:26.895955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" containerName="keystone-db-sync" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.895973 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" containerName="keystone-db-sync" Feb 21 07:05:26 crc kubenswrapper[4820]: E0221 07:05:26.895994 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="dnsmasq-dns" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896002 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" 
containerName="dnsmasq-dns" Feb 21 07:05:26 crc kubenswrapper[4820]: E0221 07:05:26.896029 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="init" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896037 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="init" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896277 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" containerName="keystone-db-sync" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.896307 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf146cf-2a59-4e4c-8b3b-cd34b40ac463" containerName="dnsmasq-dns" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.897019 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901615 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901755 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.901850 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.908769 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w79dl" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.909979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:26 crc kubenswrapper[4820]: 
I0221 07:05:26.911679 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.925394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.938860 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941675 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"keystone-bootstrap-clkkr\" (UID: 
\"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:26 crc kubenswrapper[4820]: I0221 07:05:26.941903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043454 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043713 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.043989 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod 
\"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044386 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044628 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.044718 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") 
" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.045481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.047461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.048287 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.048853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.053889 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.058827 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.059034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.059069 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.066212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.071940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"dnsmasq-dns-84f6cc7f47-bh6wx\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.072412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.101101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"keystone-bootstrap-clkkr\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.149448 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.150668 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.154068 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.154328 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.155064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mmvl6" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.182624 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.183648 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.189730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.189946 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qfdgf" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.194267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.234308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.248388 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249286 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " 
pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249409 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249437 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.249534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.252520 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zjnng" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.253021 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.263348 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.264457 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.272033 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277089 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p47r7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277358 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277897 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.277982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.280961 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.314744 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.334983 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353428 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353496 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353518 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod 
\"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353620 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"placement-db-sync-wdvf7\" (UID: 
\"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.353724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.362508 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.363700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.364047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: 
I0221 07:05:27.364839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.365064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.371912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.373861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.377917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"cinder-db-sync-vfn4b\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.377994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.388013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"neutron-db-sync-lj8d2\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") " pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.422984 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.431162 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.433696 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.433935 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.445376 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.455924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456057 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456117 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456139 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svdtb\" (UniqueName: 
\"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456218 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456372 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"barbican-db-sync-smnkd\" (UID: 
\"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.456467 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.461025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.463759 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.464064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.468830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.472891 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.472918 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.474199 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.474487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.481107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"barbican-db-sync-smnkd\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.481141 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcn9d\" (UniqueName: 
\"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"placement-db-sync-wdvf7\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") " pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.500513 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.540689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lj8d2" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.541694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559183 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.559284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.563914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.565858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.574979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.576875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.577788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.581697 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.582040 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.591440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"ceilometer-0\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.600711 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.654031 4820 generic.go:334] "Generic (PLEG): container finished" podID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerID="e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83" exitCode=0 Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.654067 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerDied","Data":"e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83"} Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.667811 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.694354 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.760747 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.773946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774061 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.774128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.775954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.776963 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.777971 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.778591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod 
\"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.781872 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.782420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.830809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"dnsmasq-dns-68bc8f6695-dvmcz\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") " pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:27 crc kubenswrapper[4820]: W0221 07:05:27.834787 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52535b6c_a2fa_41da_aeea_143da861244d.slice/crio-8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1 WatchSource:0}: Error finding container 8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1: Status 404 returned error can't find the container with id 8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1 Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.876608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 
07:05:27.876977 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877061 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.877137 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") pod \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\" (UID: \"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f\") " Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.897101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw" (OuterVolumeSpecName: 
"kube-api-access-9gjfw") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "kube-api-access-9gjfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.952981 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.964381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.977450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.980210 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gjfw\" (UniqueName: \"kubernetes.io/projected/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-kube-api-access-9gjfw\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.982019 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.982102 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.982158 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:27 crc kubenswrapper[4820]: I0221 07:05:27.984211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config" (OuterVolumeSpecName: "config") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.009689 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: E0221 07:05:28.010219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="init" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.010277 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="init" Feb 21 07:05:28 crc kubenswrapper[4820]: E0221 07:05:28.010291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.010300 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.010789 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" containerName="dnsmasq-dns" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.011917 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014010 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bl7bk" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014226 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014263 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" (UID: "f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014912 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.014953 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.061704 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.079182 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.080795 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.085368 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.085676 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.092465 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.093306 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.093341 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.105107 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.109712 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.195681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196040 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196076 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196099 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196293 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196390 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") 
pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.196481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.297983 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " 
pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298111 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 
07:05:28.298220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298270 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.298776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.299163 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.300206 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.301340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.303452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.303516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.303733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.306229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.306551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.307832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.308570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.323342 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.323866 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.324095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.324356 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.347538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.349968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.459229 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.473025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.488965 4820 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:05:28 crc kubenswrapper[4820]: W0221 07:05:28.507407 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb400c916_2ba9_4d7e_b9f5_6044605f279c.slice/crio-8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24 WatchSource:0}: Error finding container 8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24: Status 404 returned error can't find the container with id 8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24 Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.509805 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.548720 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.590450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.623569 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.682857 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.710194 4820 generic.go:334] "Generic (PLEG): container finished" podID="52535b6c-a2fa-41da-aeea-143da861244d" containerID="7c1847cdd2d5d8bbe97a6ee50c2aa4639920b7e7798a71e10c2f9806b4f4e40d" exitCode=0 Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.710279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" event={"ID":"52535b6c-a2fa-41da-aeea-143da861244d","Type":"ContainerDied","Data":"7c1847cdd2d5d8bbe97a6ee50c2aa4639920b7e7798a71e10c2f9806b4f4e40d"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.710307 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" event={"ID":"52535b6c-a2fa-41da-aeea-143da861244d","Type":"ContainerStarted","Data":"8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.771487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerStarted","Data":"17a0db325762105ee3f17079844c6e2a58dff4258e4ae6c4099739f2cd6e0a2f"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.825615 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" event={"ID":"f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f","Type":"ContainerDied","Data":"a723fddcd9d1993fd8a4e14435b41cd1df614738a2273b0336bdb8655a15c017"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.825679 4820 scope.go:117] "RemoveContainer" containerID="e5f2c82bb99b70af0764eb67db8fcabcdc99c272f220452df12430ff599ced83" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.825859 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69577ff67f-ngmlj" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.889868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerStarted","Data":"cc4fb96b39e1936b86af57f1db3fb5919410cadbcdd356f24dc36d2766a16bc7"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.931053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerStarted","Data":"8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.938862 4820 scope.go:117] "RemoveContainer" containerID="04ebef274a53b73c45ecce50762acad1cf35b7e93a1502414bf9f32f7aa71d85" Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.941089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerStarted","Data":"1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.941138 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerStarted","Data":"0fb67de2b1978c0e372a77b846d58a92de58425b3a806773cc6a315c6efe11bf"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.967480 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerStarted","Data":"ba77ea1a8e56334ddcc0c11ab2474c1a360646ae3121c5554dc3dabd168e0eca"} Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.970251 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] 
Feb 21 07:05:28 crc kubenswrapper[4820]: I0221 07:05:28.974622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"0f8176927ad01d0eb54f7e8ca55f1bbe340ac767367622047b311589a963df40"} Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.003314 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69577ff67f-ngmlj"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.013156 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-clkkr" podStartSLOduration=3.013129818 podStartE2EDuration="3.013129818s" podCreationTimestamp="2026-02-21 07:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:28.987676434 +0000 UTC m=+1104.020760632" watchObservedRunningTime="2026-02-21 07:05:29.013129818 +0000 UTC m=+1104.046214016" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.034025 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lj8d2" podStartSLOduration=2.034001747 podStartE2EDuration="2.034001747s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:29.007119015 +0000 UTC m=+1104.040203213" watchObservedRunningTime="2026-02-21 07:05:29.034001747 +0000 UTC m=+1104.067085945" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.291867 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.384626 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.466035 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: W0221 07:05:29.489499 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod461dc704_1698_4a81_bb65_4009ee43495d.slice/crio-d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d WatchSource:0}: Error finding container d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d: Status 404 returned error can't find the container with id d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.553530 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561566 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561739 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 
07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.561798 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.562023 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") pod \"52535b6c-a2fa-41da-aeea-143da861244d\" (UID: \"52535b6c-a2fa-41da-aeea-143da861244d\") " Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.574822 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8" (OuterVolumeSpecName: "kube-api-access-8vhx8") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "kube-api-access-8vhx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.599084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.602465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config" (OuterVolumeSpecName: "config") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.639692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.641171 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.649871 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52535b6c-a2fa-41da-aeea-143da861244d" (UID: "52535b6c-a2fa-41da-aeea-143da861244d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.655383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667005 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667051 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vhx8\" (UniqueName: \"kubernetes.io/projected/52535b6c-a2fa-41da-aeea-143da861244d-kube-api-access-8vhx8\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667064 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667079 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667094 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.667105 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52535b6c-a2fa-41da-aeea-143da861244d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.670164 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:05:29 crc 
kubenswrapper[4820]: I0221 07:05:29.711088 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f" path="/var/lib/kubelet/pods/f122b6d8-f1d8-49ff-9056-d1c1cfd1ff5f/volumes" Feb 21 07:05:29 crc kubenswrapper[4820]: E0221 07:05:29.881913 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52535b6c_a2fa_41da_aeea_143da861244d.slice\": RecentStats: unable to find data in memory cache]" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.991226 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" event={"ID":"52535b6c-a2fa-41da-aeea-143da861244d","Type":"ContainerDied","Data":"8a7c8e6f8a819fc83f3cf159fcee59f34fa83470772d092880bf4f279ed7b7c1"} Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.991303 4820 scope.go:117] "RemoveContainer" containerID="7c1847cdd2d5d8bbe97a6ee50c2aa4639920b7e7798a71e10c2f9806b4f4e40d" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.991427 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84f6cc7f47-bh6wx" Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.997062 4820 generic.go:334] "Generic (PLEG): container finished" podID="375bfff4-76af-4f71-a665-c409feeb6f67" containerID="18dc85665c905eaff86848c97e8cbe825cac87dcc411dd90b770e67c8c997f65" exitCode=0 Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.997115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerDied","Data":"18dc85665c905eaff86848c97e8cbe825cac87dcc411dd90b770e67c8c997f65"} Feb 21 07:05:29 crc kubenswrapper[4820]: I0221 07:05:29.997141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerStarted","Data":"0deee66ca0c914e04051643e2ef7f61bf67d60020463554eb611d4a4dbdb4fc8"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.007430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerStarted","Data":"7ac42339ffb42ecc0717cb27e6d9608813dcb6377518f31c7fcea3928ee2ca43"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.030692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerStarted","Data":"d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.043504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerStarted","Data":"52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3"} Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.134878 4820 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:30 crc kubenswrapper[4820]: I0221 07:05:30.143673 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84f6cc7f47-bh6wx"] Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.057734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerStarted","Data":"41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f"} Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.063069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerStarted","Data":"b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952"} Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.074015 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerStarted","Data":"fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173"} Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.074296 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.096696 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" podStartSLOduration=4.096681355 podStartE2EDuration="4.096681355s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:31.096016227 +0000 UTC m=+1106.129100425" watchObservedRunningTime="2026-02-21 07:05:31.096681355 +0000 UTC m=+1106.129765553" Feb 21 07:05:31 crc kubenswrapper[4820]: I0221 07:05:31.708196 4820 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52535b6c-a2fa-41da-aeea-143da861244d" path="/var/lib/kubelet/pods/52535b6c-a2fa-41da-aeea-143da861244d/volumes" Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.094891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerStarted","Data":"8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae"} Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.095067 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log" containerID="cri-o://41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.095107 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd" containerID="cri-o://8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.100614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerStarted","Data":"45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd"} Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.100633 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log" containerID="cri-o://b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.100779 4820 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd" containerID="cri-o://45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd" gracePeriod=30 Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.152725 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.152701162 podStartE2EDuration="5.152701162s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:32.148143767 +0000 UTC m=+1107.181227965" watchObservedRunningTime="2026-02-21 07:05:32.152701162 +0000 UTC m=+1107.185785360" Feb 21 07:05:32 crc kubenswrapper[4820]: I0221 07:05:32.158266 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.158227202 podStartE2EDuration="6.158227202s" podCreationTimestamp="2026-02-21 07:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:32.12076709 +0000 UTC m=+1107.153851288" watchObservedRunningTime="2026-02-21 07:05:32.158227202 +0000 UTC m=+1107.191311390" Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.112729 4820 generic.go:334] "Generic (PLEG): container finished" podID="461dc704-1698-4a81-bb65-4009ee43495d" containerID="45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd" exitCode=0 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.112762 4820 generic.go:334] "Generic (PLEG): container finished" podID="461dc704-1698-4a81-bb65-4009ee43495d" containerID="b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952" exitCode=143 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.112844 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerDied","Data":"45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd"} Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.113682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerDied","Data":"b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952"} Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122476 4820 generic.go:334] "Generic (PLEG): container finished" podID="316968d3-d62b-4a31-b157-02f4a33cd175" containerID="8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae" exitCode=0 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122518 4820 generic.go:334] "Generic (PLEG): container finished" podID="316968d3-d62b-4a31-b157-02f4a33cd175" containerID="41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f" exitCode=143 Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerDied","Data":"8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae"} Feb 21 07:05:33 crc kubenswrapper[4820]: I0221 07:05:33.122586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerDied","Data":"41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f"} Feb 21 07:05:34 crc kubenswrapper[4820]: I0221 07:05:34.150264 4820 generic.go:334] "Generic (PLEG): container finished" podID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerID="1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b" exitCode=0 Feb 21 07:05:34 crc kubenswrapper[4820]: I0221 07:05:34.150329 
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerDied","Data":"1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b"} Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.547617 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714801 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.714969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.715008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") pod \"aece5dfd-5954-404f-b713-4fe36b649ce9\" (UID: \"aece5dfd-5954-404f-b713-4fe36b649ce9\") " Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.720411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.720751 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts" (OuterVolumeSpecName: "scripts") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.721293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2" (OuterVolumeSpecName: "kube-api-access-z4ts2") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "kube-api-access-z4ts2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.728495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.743030 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.744590 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data" (OuterVolumeSpecName: "config-data") pod "aece5dfd-5954-404f-b713-4fe36b649ce9" (UID: "aece5dfd-5954-404f-b713-4fe36b649ce9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817339 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817765 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817826 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817887 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ts2\" (UniqueName: \"kubernetes.io/projected/aece5dfd-5954-404f-b713-4fe36b649ce9-kube-api-access-z4ts2\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817943 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:37 crc kubenswrapper[4820]: I0221 07:05:37.817997 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aece5dfd-5954-404f-b713-4fe36b649ce9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.112479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.184147 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"] Feb 21 
07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.184411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" containerID="cri-o://768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505" gracePeriod=10 Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.212908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clkkr" event={"ID":"aece5dfd-5954-404f-b713-4fe36b649ce9","Type":"ContainerDied","Data":"0fb67de2b1978c0e372a77b846d58a92de58425b3a806773cc6a315c6efe11bf"} Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.212956 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb67de2b1978c0e372a77b846d58a92de58425b3a806773cc6a315c6efe11bf" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.213014 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clkkr" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.644251 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.664702 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-clkkr"] Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.731078 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:05:38 crc kubenswrapper[4820]: E0221 07:05:38.731805 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52535b6c-a2fa-41da-aeea-143da861244d" containerName="init" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.731826 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="52535b6c-a2fa-41da-aeea-143da861244d" containerName="init" Feb 21 07:05:38 crc kubenswrapper[4820]: E0221 07:05:38.731841 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerName="keystone-bootstrap" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.731849 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerName="keystone-bootstrap" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.732065 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" containerName="keystone-bootstrap" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.732092 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="52535b6c-a2fa-41da-aeea-143da861244d" containerName="init" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.732875 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.742518 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w79dl" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.742871 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.742954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743152 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55sl\" (UniqueName: 
\"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743288 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743337 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.743427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 
07:05:38.743515 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.768175 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.844831 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.852923 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.853071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.853619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.854068 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " 
pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.854969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:38 crc kubenswrapper[4820]: I0221 07:05:38.869726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"keystone-bootstrap-s76l5\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.067430 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.223733 4820 generic.go:334] "Generic (PLEG): container finished" podID="97c27e55-f0a0-4253-b573-21c027992fe7" containerID="768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505" exitCode=0 Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.223780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerDied","Data":"768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505"} Feb 21 07:05:39 crc kubenswrapper[4820]: I0221 07:05:39.706545 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aece5dfd-5954-404f-b713-4fe36b649ce9" path="/var/lib/kubelet/pods/aece5dfd-5954-404f-b713-4fe36b649ce9/volumes" Feb 21 07:05:40 crc kubenswrapper[4820]: I0221 07:05:40.082366 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" 
podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 21 07:05:45 crc kubenswrapper[4820]: I0221 07:05:45.082801 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 21 07:05:47 crc kubenswrapper[4820]: E0221 07:05:47.019919 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5ac8ede62671a3b3695cf29bd3a6f124f27c93d1730f9030cc3daa05034d4af4" Feb 21 07:05:47 crc kubenswrapper[4820]: E0221 07:05:47.020352 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5ac8ede62671a3b3695cf29bd3a6f124f27c93d1730f9030cc3daa05034d4af4,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfh55dh557h5f5h5b9h654h5ddhc4hbfh5bch75h645hc9h64bh68h56dh5d6h674h55ch98h648h7dh7fh85h68h556h5bh674h55h549h69hd9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vr9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a3cce54d-5f2a-4e51-864d-03e55b50d698): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:05:49 crc kubenswrapper[4820]: E0221 07:05:49.311654 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" Feb 21 07:05:49 crc kubenswrapper[4820]: E0221 07:05:49.312087 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcn9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-wdvf7_openstack(e2b995bf-93f1-4f28-a1a6-0d13ac9ca744): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:05:49 crc kubenswrapper[4820]: E0221 07:05:49.313308 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-wdvf7" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.409961 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36\\\"\"" pod="openstack/placement-db-sync-wdvf7" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.713763 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.714145 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2l4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vfn4b_openstack(b400c916-2ba9-4d7e-b9f5-6044605f279c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 07:05:50 crc kubenswrapper[4820]: E0221 07:05:50.715304 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vfn4b" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.874571 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.886513 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.900858 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978807 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978953 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.978996 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.979210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") pod \"316968d3-d62b-4a31-b157-02f4a33cd175\" (UID: \"316968d3-d62b-4a31-b157-02f4a33cd175\") " Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.981014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.981041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs" (OuterVolumeSpecName: "logs") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.984053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g" (OuterVolumeSpecName: "kube-api-access-j6r6g") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "kube-api-access-j6r6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.988852 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:05:50 crc kubenswrapper[4820]: I0221 07:05:50.999652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts" (OuterVolumeSpecName: "scripts") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.013753 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.031352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data" (OuterVolumeSpecName: "config-data") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.056788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "316968d3-d62b-4a31-b157-02f4a33cd175" (UID: "316968d3-d62b-4a31-b157-02f4a33cd175"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.080837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081706 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081742 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.081766 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081817 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081852 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081898 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.081995 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" 
(UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082080 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082106 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") pod \"97c27e55-f0a0-4253-b573-21c027992fe7\" (UID: \"97c27e55-f0a0-4253-b573-21c027992fe7\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") pod \"461dc704-1698-4a81-bb65-4009ee43495d\" (UID: \"461dc704-1698-4a81-bb65-4009ee43495d\") " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082702 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082734 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.082748 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6r6g\" (UniqueName: \"kubernetes.io/projected/316968d3-d62b-4a31-b157-02f4a33cd175-kube-api-access-j6r6g\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.082983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084033 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs" (OuterVolumeSpecName: "logs") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084418 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084455 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084470 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084480 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/316968d3-d62b-4a31-b157-02f4a33cd175-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.084499 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316968d3-d62b-4a31-b157-02f4a33cd175-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.085765 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.089853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.090742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts" (OuterVolumeSpecName: "scripts") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.091209 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6" (OuterVolumeSpecName: "kube-api-access-sf6x6") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "kube-api-access-sf6x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.093424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6" (OuterVolumeSpecName: "kube-api-access-4zxq6") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "kube-api-access-4zxq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.112974 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.123688 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.133953 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.136046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.136908 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config" (OuterVolumeSpecName: "config") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.141148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97c27e55-f0a0-4253-b573-21c027992fe7" (UID: "97c27e55-f0a0-4253-b573-21c027992fe7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.143997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.150435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data" (OuterVolumeSpecName: "config-data") pod "461dc704-1698-4a81-bb65-4009ee43495d" (UID: "461dc704-1698-4a81-bb65-4009ee43495d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186142 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186182 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186196 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186207 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186216 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186227 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186260 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf6x6\" (UniqueName: \"kubernetes.io/projected/97c27e55-f0a0-4253-b573-21c027992fe7-kube-api-access-sf6x6\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186273 4820 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186285 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461dc704-1698-4a81-bb65-4009ee43495d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186294 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461dc704-1698-4a81-bb65-4009ee43495d-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186307 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186318 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97c27e55-f0a0-4253-b573-21c027992fe7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186331 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxq6\" (UniqueName: \"kubernetes.io/projected/461dc704-1698-4a81-bb65-4009ee43495d-kube-api-access-4zxq6\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.186375 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.203459 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 
07:05:51.287592 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.415519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" event={"ID":"97c27e55-f0a0-4253-b573-21c027992fe7","Type":"ContainerDied","Data":"86f086e1554176bb192e9a0f40187bc917685a90d4baa1f41b7eedcf9aeba502"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.415878 4820 scope.go:117] "RemoveContainer" containerID="768c0701e8f8f7783ec7add20fa58d3a392d65a4a41a9f5f3a7c5d275fa45505" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.415567 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.418308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerStarted","Data":"550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.418371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerStarted","Data":"6648d21ac045e633a749ffcd89f6a738d8e2e691df6270b6eb372db55cf2ebb2"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.420774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"316968d3-d62b-4a31-b157-02f4a33cd175","Type":"ContainerDied","Data":"7ac42339ffb42ecc0717cb27e6d9608813dcb6377518f31c7fcea3928ee2ca43"} Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.420820 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.425024 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"461dc704-1698-4a81-bb65-4009ee43495d","Type":"ContainerDied","Data":"d645e31fcb0bd8bc7a86806a34c23d7f268977e0bb1931333e7f6821ba2aef6d"}
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.425086 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.431749 4820 generic.go:334] "Generic (PLEG): container finished" podID="085b95c8-2602-461b-8a08-91aff75f97a0" containerID="52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3" exitCode=0
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.431805 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerDied","Data":"52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3"}
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.433750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805"}
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.435385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerStarted","Data":"2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445"}
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.446937 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s76l5" podStartSLOduration=13.446918079 podStartE2EDuration="13.446918079s" podCreationTimestamp="2026-02-21 07:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:51.440980616 +0000 UTC m=+1126.474064814" watchObservedRunningTime="2026-02-21 07:05:51.446918079 +0000 UTC m=+1126.480002277"
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.452690 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b\\\"\"" pod="openstack/cinder-db-sync-vfn4b" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.453031 4820 scope.go:117] "RemoveContainer" containerID="b23b69dde5d8d2db7290e326e8c103f21a46fecab91f2fe5987461b750aca0cf"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.477475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.484756 4820 scope.go:117] "RemoveContainer" containerID="8f493bbc1bdd41dab70a0b09a276db32fe5937f4e4911f40ec588748ad330aae"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.487921 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-mhcgl"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.500156 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-smnkd" podStartSLOduration=2.312935826 podStartE2EDuration="24.500138252s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="2026-02-21 07:05:28.50230363 +0000 UTC m=+1103.535387828" lastFinishedPulling="2026-02-21 07:05:50.689506056 +0000 UTC m=+1125.722590254" observedRunningTime="2026-02-21 07:05:51.492516473 +0000 UTC m=+1126.525600681" watchObservedRunningTime="2026-02-21 07:05:51.500138252 +0000 UTC m=+1126.533222450"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.512450 4820 scope.go:117] "RemoveContainer" containerID="41a454544d0148d2faee476a316b896e6f909f8bf8b2d3744f1b86f2fa20f98f"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.542078 4820 scope.go:117] "RemoveContainer" containerID="45be5b555db34316ff34ee5039f0067478ff5a66ee6b2f029e0fcf1d6806fecd"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.546272 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.570803 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.579749 4820 scope.go:117] "RemoveContainer" containerID="b020d2fe428151da7b2eb896196f55cc5b48b664fc8e307cd9ea0ea9a7eb0952"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.590678 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591062 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="init"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591082 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="init"
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591092 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591102 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log"
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591122 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591128 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd"
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591139 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log"
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591166 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591173 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns"
Feb 21 07:05:51 crc kubenswrapper[4820]: E0221 07:05:51.591184 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591189 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591351 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-log"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591369 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591382 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" containerName="glance-httpd"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591393 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-httpd"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.591401 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="461dc704-1698-4a81-bb65-4009ee43495d" containerName="glance-log"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.592283 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.606263 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.607005 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.607416 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.608313 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.608505 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bl7bk"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.619842 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.628113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.635093 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.636657 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.638864 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.639145 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.641804 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697667 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.697918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.699548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.713287 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316968d3-d62b-4a31-b157-02f4a33cd175" path="/var/lib/kubelet/pods/316968d3-d62b-4a31-b157-02f4a33cd175/volumes"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.714178 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="461dc704-1698-4a81-bb65-4009ee43495d" path="/var/lib/kubelet/pods/461dc704-1698-4a81-bb65-4009ee43495d/volumes"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.716986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" path="/var/lib/kubelet/pods/97c27e55-f0a0-4253-b573-21c027992fe7/volumes"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801935 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.801998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802032 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802253 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802320 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.802410 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.803029 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.803045 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.803298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.808358 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.809032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.809816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.812038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.822657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.833836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") " pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.903719 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904867 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.904979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.905067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.905095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.905374 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.909956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.910836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.914792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.915216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.915826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.921891 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.928028 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.935514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.944703 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"glance-default-internal-api-0\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:51 crc kubenswrapper[4820]: I0221 07:05:51.959024 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 07:05:52 crc kubenswrapper[4820]: I0221 07:05:52.501080 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 07:05:52 crc kubenswrapper[4820]: I0221 07:05:52.608076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:05:52 crc kubenswrapper[4820]: W0221 07:05:52.633742 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f5a553e_c548_455a_83e2_87f8f71f3067.slice/crio-52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd WatchSource:0}: Error finding container 52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd: Status 404 returned error can't find the container with id 52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd
Feb 21 07:05:52 crc kubenswrapper[4820]: I0221 07:05:52.909766 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lj8d2"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.038445 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") pod \"085b95c8-2602-461b-8a08-91aff75f97a0\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") "
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.038588 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") pod \"085b95c8-2602-461b-8a08-91aff75f97a0\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") "
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.038627 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") pod \"085b95c8-2602-461b-8a08-91aff75f97a0\" (UID: \"085b95c8-2602-461b-8a08-91aff75f97a0\") "
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.044402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq" (OuterVolumeSpecName: "kube-api-access-vjnzq") pod "085b95c8-2602-461b-8a08-91aff75f97a0" (UID: "085b95c8-2602-461b-8a08-91aff75f97a0"). InnerVolumeSpecName "kube-api-access-vjnzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.097081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config" (OuterVolumeSpecName: "config") pod "085b95c8-2602-461b-8a08-91aff75f97a0" (UID: "085b95c8-2602-461b-8a08-91aff75f97a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.138430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085b95c8-2602-461b-8a08-91aff75f97a0" (UID: "085b95c8-2602-461b-8a08-91aff75f97a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.145284 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjnzq\" (UniqueName: \"kubernetes.io/projected/085b95c8-2602-461b-8a08-91aff75f97a0-kube-api-access-vjnzq\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.145385 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.145467 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/085b95c8-2602-461b-8a08-91aff75f97a0-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.469486 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerStarted","Data":"52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd"}
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.471010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lj8d2" event={"ID":"085b95c8-2602-461b-8a08-91aff75f97a0","Type":"ContainerDied","Data":"ba77ea1a8e56334ddcc0c11ab2474c1a360646ae3121c5554dc3dabd168e0eca"}
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.471030 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba77ea1a8e56334ddcc0c11ab2474c1a360646ae3121c5554dc3dabd168e0eca"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.471084 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lj8d2"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.482262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerStarted","Data":"3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48"}
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.482302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerStarted","Data":"55817b22512b4f79b05a91fa0314cc7452c7e5542175c8a9531d82ddc3a3f526"}
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.775495 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"]
Feb 21 07:05:53 crc kubenswrapper[4820]: E0221 07:05:53.775931 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" containerName="neutron-db-sync"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.775949 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" containerName="neutron-db-sync"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.776184 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" containerName="neutron-db-sync"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.779272 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.801403 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"]
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8"
Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.870843 4820 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.873737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.967463 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.969296 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.974845 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.974890 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.974988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 
21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975038 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975718 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.975978 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976540 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976637 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qfdgf" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.976880 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.978071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.978360 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:53 crc kubenswrapper[4820]: I0221 07:05:53.990333 
4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.004993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"dnsmasq-dns-77d55b9c69-vznz8\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077092 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod 
\"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.077455 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.119892 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.179725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.183978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.185600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.199700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.200636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"neutron-7777947948-b8bjv\" (UID: 
\"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.208435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod \"neutron-7777947948-b8bjv\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.310330 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.516115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerStarted","Data":"94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e"} Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.524566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerStarted","Data":"f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4"} Feb 21 07:05:54 crc kubenswrapper[4820]: I0221 07:05:54.757479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.005172 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.084026 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-689df5d84f-mhcgl" podUID="97c27e55-f0a0-4253-b573-21c027992fe7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.536039 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerStarted","Data":"f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.539578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerStarted","Data":"47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.539627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerStarted","Data":"336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.539645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerStarted","Data":"3e28ba467d144d224a1ff3d02bb67eaf401e7d86630f2424dc064e42e81ffa60"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.541551 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca0fc508-843f-44b4-96a4-83072d14662c" containerID="d3650ad3fcb47b7036345aad44f6f82c057e9abcdc67b5b3bbf796f227d25110" exitCode=0 Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.542410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerDied","Data":"d3650ad3fcb47b7036345aad44f6f82c057e9abcdc67b5b3bbf796f227d25110"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.542445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" 
event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerStarted","Data":"605148f1eaf42ee14c34a0ed19827ae005b65705aabb872c28cb0ecdd7dd5d16"} Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.589143 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.589125179 podStartE2EDuration="4.589125179s" podCreationTimestamp="2026-02-21 07:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:55.557695001 +0000 UTC m=+1130.590779209" watchObservedRunningTime="2026-02-21 07:05:55.589125179 +0000 UTC m=+1130.622209387" Feb 21 07:05:55 crc kubenswrapper[4820]: I0221 07:05:55.645225 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.64520382 podStartE2EDuration="4.64520382s" podCreationTimestamp="2026-02-21 07:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:55.637247413 +0000 UTC m=+1130.670331641" watchObservedRunningTime="2026-02-21 07:05:55.64520382 +0000 UTC m=+1130.678288018" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.555055 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerStarted","Data":"6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861"} Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.555447 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.558659 4820 generic.go:334] "Generic (PLEG): container finished" podID="a9866838-084f-4340-b72d-5dba3461661e" 
containerID="550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3" exitCode=0 Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.558712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerDied","Data":"550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3"} Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.561572 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerID="2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445" exitCode=0 Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.562546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerDied","Data":"2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445"} Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.563146 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.582201 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.584147 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.587698 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.588032 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.602372 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" podStartSLOduration=3.6023534980000003 podStartE2EDuration="3.602353498s" podCreationTimestamp="2026-02-21 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:56.58303497 +0000 UTC m=+1131.616119168" watchObservedRunningTime="2026-02-21 07:05:56.602353498 +0000 UTC m=+1131.635437696" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.622418 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.632508 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7777947948-b8bjv" podStartSLOduration=3.6324865600000003 podStartE2EDuration="3.63248656s" podCreationTimestamp="2026-02-21 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:05:56.621865941 +0000 UTC m=+1131.654950159" watchObservedRunningTime="2026-02-21 07:05:56.63248656 +0000 UTC m=+1131.665570758" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.752662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzcm\" (UniqueName: 
\"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.752821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753676 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.753919 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.855810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.855876 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856040 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") 
pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.856904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.857314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.862298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.862897 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " 
pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.863755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.872465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.873144 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.874006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.882047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"neutron-85dd5db455-fl7mt\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") " pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:56 crc kubenswrapper[4820]: I0221 07:05:56.902746 4820 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.415369 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.543035 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.588818 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") pod \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.589105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") pod \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.589279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") pod \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\" (UID: \"f9b51414-aa8f-49ad-b662-b3c44eb0bc62\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.598077 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f9b51414-aa8f-49ad-b662-b3c44eb0bc62" (UID: "f9b51414-aa8f-49ad-b662-b3c44eb0bc62"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600365 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb" (OuterVolumeSpecName: "kube-api-access-svdtb") pod "f9b51414-aa8f-49ad-b662-b3c44eb0bc62" (UID: "f9b51414-aa8f-49ad-b662-b3c44eb0bc62"). InnerVolumeSpecName "kube-api-access-svdtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600543 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-smnkd" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-smnkd" event={"ID":"f9b51414-aa8f-49ad-b662-b3c44eb0bc62","Type":"ContainerDied","Data":"17a0db325762105ee3f17079844c6e2a58dff4258e4ae6c4099739f2cd6e0a2f"} Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.600870 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17a0db325762105ee3f17079844c6e2a58dff4258e4ae6c4099739f2cd6e0a2f" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.611734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s76l5" event={"ID":"a9866838-084f-4340-b72d-5dba3461661e","Type":"ContainerDied","Data":"6648d21ac045e633a749ffcd89f6a738d8e2e691df6270b6eb372db55cf2ebb2"} Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.611773 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6648d21ac045e633a749ffcd89f6a738d8e2e691df6270b6eb372db55cf2ebb2" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.611863 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s76l5" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.623626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9b51414-aa8f-49ad-b662-b3c44eb0bc62" (UID: "f9b51414-aa8f-49ad-b662-b3c44eb0bc62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691247 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691446 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") pod \"a9866838-084f-4340-b72d-5dba3461661e\" (UID: \"a9866838-084f-4340-b72d-5dba3461661e\") " Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691978 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.691998 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.692008 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svdtb\" (UniqueName: \"kubernetes.io/projected/f9b51414-aa8f-49ad-b662-b3c44eb0bc62-kube-api-access-svdtb\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.696404 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.699150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl" (OuterVolumeSpecName: "kube-api-access-x55sl") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "kube-api-access-x55sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.702927 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts" (OuterVolumeSpecName: "scripts") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.703066 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.744363 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.749193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data" (OuterVolumeSpecName: "config-data") pod "a9866838-084f-4340-b72d-5dba3461661e" (UID: "a9866838-084f-4340-b72d-5dba3461661e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.768661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:05:58 crc kubenswrapper[4820]: E0221 07:05:58.769080 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerName="barbican-db-sync" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.769101 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerName="barbican-db-sync" Feb 21 07:05:58 crc kubenswrapper[4820]: E0221 07:05:58.769137 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9866838-084f-4340-b72d-5dba3461661e" containerName="keystone-bootstrap" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.769145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9866838-084f-4340-b72d-5dba3461661e" containerName="keystone-bootstrap" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.774531 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" containerName="barbican-db-sync" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.774596 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9866838-084f-4340-b72d-5dba3461661e" containerName="keystone-bootstrap" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.775402 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.777788 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.781125 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793743 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793778 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793788 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x55sl\" (UniqueName: \"kubernetes.io/projected/a9866838-084f-4340-b72d-5dba3461661e-kube-api-access-x55sl\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793798 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793808 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.793817 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9866838-084f-4340-b72d-5dba3461661e-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.796020 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895825 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895958 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.895994 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.896171 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.977740 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"] Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.997956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc 
kubenswrapper[4820]: I0221 07:05:58.998059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998290 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:58 crc kubenswrapper[4820]: I0221 07:05:58.998351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.003016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.004677 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.006143 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.007716 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.007789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.010303 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.017751 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.039081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"keystone-665c5b9dff-g2t96\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.068358 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.069726 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.076968 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zjnng" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.077167 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.077688 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.090834 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.091022 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.133981 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.135428 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.145664 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.185315 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208309 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod 
\"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.208363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.217474 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.218885 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.289724 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333587 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.333938 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.334107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.334141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.334369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.344407 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.344856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.346691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.349208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.352331 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.352714 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" containerID="cri-o://6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861" gracePeriod=10
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.367070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"barbican-worker-758b5755fc-2m84q\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") " pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.370261 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.372801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.406873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.408768 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.420643 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.420739 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436053 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436108 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436155 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436173 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436273 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436321 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436405 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436424 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436483 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436551 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.436722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.438532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.459474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.463263 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.463462 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.466152 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.472479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.474150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.474508 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"barbican-worker-867cbf55-jx754\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.474588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"barbican-keystone-listener-559489d5f8-ngqx9\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") " pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.482058 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.490373 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.514471 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.516358 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.526254 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.538295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539418 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539587 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.539626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.545833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.547031 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.547569 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.549363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.548593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.548781 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.547843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.551740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.554827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.563465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"dnsmasq-dns-7489f6876c-2n9gl\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") " pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.584401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"barbican-keystone-listener-79b8cb94b4-h6tqh\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") " pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.641360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.641726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.641918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.645530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.645781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.655664 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca0fc508-843f-44b4-96a4-83072d14662c" containerID="6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861" exitCode=0
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.655953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerDied","Data":"6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861"}
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.657320 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.674984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerStarted","Data":"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"}
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.675041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerStarted","Data":"73fe748c020d9cdb0f7411013cf334c00e8fbd8633affe05f3bd15d54091bf15"}
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.680608 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.695320 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.709891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5"}
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.747522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.747709 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.747810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.748005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.748136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.750107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.754873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.758479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.766354 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.769463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"]
Feb 21 07:05:59 crc kubenswrapper[4820]: I0221 07:05:59.773934 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"barbican-api-8678d9479b-vqsct\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.014144 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.096831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"]
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.126310 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8"
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.198467 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"]
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260328 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") "
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260360 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") "
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260392 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.260577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") pod \"ca0fc508-843f-44b4-96a4-83072d14662c\" (UID: \"ca0fc508-843f-44b4-96a4-83072d14662c\") " Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.273201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g" (OuterVolumeSpecName: "kube-api-access-8mc4g") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "kube-api-access-8mc4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.367459 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mc4g\" (UniqueName: \"kubernetes.io/projected/ca0fc508-843f-44b4-96a4-83072d14662c-kube-api-access-8mc4g\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.407093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.418736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.419034 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.419652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.423921 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config" (OuterVolumeSpecName: "config") pod "ca0fc508-843f-44b4-96a4-83072d14662c" (UID: "ca0fc508-843f-44b4-96a4-83072d14662c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477474 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477504 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477516 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477527 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-dns-svc\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.477535 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0fc508-843f-44b4-96a4-83072d14662c-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.706911 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.746366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerStarted","Data":"232aac902ab163c61332ca9251f3b8bd22a0d25dd116a7153f1bb796d475d539"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.749778 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.755996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerStarted","Data":"fb422d822e894d47a3283c85ceaf5b546e6dcaf88367608fcf5a454edd87769f"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.771461 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.773458 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.773475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77d55b9c69-vznz8" event={"ID":"ca0fc508-843f-44b4-96a4-83072d14662c","Type":"ContainerDied","Data":"605148f1eaf42ee14c34a0ed19827ae005b65705aabb872c28cb0ecdd7dd5d16"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.773982 4820 scope.go:117] "RemoveContainer" containerID="6fd5a1ea5010e663397764292f55e9aa3ae60e4cd8178288a9eb433b1fc97861" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.775505 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerStarted","Data":"200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.775553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerStarted","Data":"64f0896a03976792d3631a63a19b92a0be5d44121ab07ab2ac5e458129f71510"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.776223 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.780566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerStarted","Data":"e37ea0169f5b1d136331cf197b065bd27c292073bd4e6a5a36c7265891cbd6b0"} Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.786926 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerStarted","Data":"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"} 
Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.787215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85dd5db455-fl7mt" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.821777 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-665c5b9dff-g2t96" podStartSLOduration=2.821757826 podStartE2EDuration="2.821757826s" podCreationTimestamp="2026-02-21 07:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:00.805787091 +0000 UTC m=+1135.838871299" watchObservedRunningTime="2026-02-21 07:06:00.821757826 +0000 UTC m=+1135.854842024" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.842006 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85dd5db455-fl7mt" podStartSLOduration=4.841986639 podStartE2EDuration="4.841986639s" podCreationTimestamp="2026-02-21 07:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:00.832277454 +0000 UTC m=+1135.865361652" watchObservedRunningTime="2026-02-21 07:06:00.841986639 +0000 UTC m=+1135.875070837" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.860406 4820 scope.go:117] "RemoveContainer" containerID="d3650ad3fcb47b7036345aad44f6f82c057e9abcdc67b5b3bbf796f227d25110" Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.870789 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.881493 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77d55b9c69-vznz8"] Feb 21 07:06:00 crc kubenswrapper[4820]: I0221 07:06:00.967559 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"] Feb 21 07:06:01 crc 
kubenswrapper[4820]: I0221 07:06:01.739359 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" path="/var/lib/kubelet/pods/ca0fc508-843f-44b4-96a4-83072d14662c/volumes" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.842445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerStarted","Data":"eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.842504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerStarted","Data":"9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.842515 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerStarted","Data":"5bed7530faf105f1f1bc8124a0e0b6da645917e74dd6cbd033eab92c51acc5f7"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.843633 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.843656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.845633 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:06:01 crc kubenswrapper[4820]: E0221 07:06:01.846080 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="init" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.846093 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="init" Feb 21 07:06:01 crc kubenswrapper[4820]: E0221 07:06:01.846116 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.846123 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.846347 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0fc508-843f-44b4-96a4-83072d14662c" containerName="dnsmasq-dns" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.847538 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.849378 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerID="6e603615eb6f8aebb5fc0a7934eddaf580b840ae971a07039f0c0c6049a9ef38" exitCode=0 Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.849455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerDied","Data":"6e603615eb6f8aebb5fc0a7934eddaf580b840ae971a07039f0c0c6049a9ef38"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.849477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerStarted","Data":"f12c1a8e0db096347f19d2697b9e9331aac42f90a3217a3038a39188f188a441"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.855221 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.857910 4820 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.868188 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerStarted","Data":"9e23535ae9303b01da633c9a5de5b1cca080fe7244d856307bd78e440fdb1a72"} Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.910892 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.912111 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8678d9479b-vqsct" podStartSLOduration=2.91209375 podStartE2EDuration="2.91209375s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:01.878200965 +0000 UTC m=+1136.911285163" watchObservedRunningTime="2026-02-21 07:06:01.91209375 +0000 UTC m=+1136.945177938" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.922841 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.922886 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.931815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.931862 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.931912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.932249 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.932278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.932319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc 
kubenswrapper[4820]: I0221 07:06:01.932345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.952336 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.960674 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.960731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:01 crc kubenswrapper[4820]: I0221 07:06:01.981539 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.016013 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.022500 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035161 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035210 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.035319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.038087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.047994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.048543 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.050164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.051027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.051881 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.065677 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"barbican-api-76b79c9766-s694g\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") " pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.325997 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g"
Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.887977 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.888036 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.888064 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 21 07:06:02 crc kubenswrapper[4820]: I0221 07:06:02.888330 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 21 07:06:03 crc kubenswrapper[4820]: I0221 07:06:03.907202 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerStarted","Data":"d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.216007 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" podStartSLOduration=5.215982542 podStartE2EDuration="5.215982542s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:03.939248096 +0000 UTC m=+1138.972332304" watchObservedRunningTime="2026-02-21 07:06:04.215982542 +0000 UTC m=+1139.249066750"
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.219827 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"]
Feb 21 07:06:04 crc kubenswrapper[4820]: W0221 07:06:04.228930 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4709782f_54e7_4a78_a56e_8f58a5556501.slice/crio-c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6 WatchSource:0}: Error finding container c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6: Status 404 returned error can't find the container with id c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.683526 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.925755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerStarted","Data":"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.930050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerStarted","Data":"d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.930430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerStarted","Data":"c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.932457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerStarted","Data":"a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.933958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerStarted","Data":"3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.935521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerStarted","Data":"e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.938868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerStarted","Data":"669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb"}
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.938944 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.938955 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.939391 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 07:06:04 crc kubenswrapper[4820]: I0221 07:06:04.939406 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.773894 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wdvf7" podStartSLOduration=3.859869468 podStartE2EDuration="38.773874344s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="2026-02-21 07:05:28.517953408 +0000 UTC m=+1103.551037596" lastFinishedPulling="2026-02-21 07:06:03.431958274 +0000 UTC m=+1138.465042472" observedRunningTime="2026-02-21 07:06:04.951519998 +0000 UTC m=+1139.984604196" watchObservedRunningTime="2026-02-21 07:06:05.773874344 +0000 UTC m=+1140.806958542"
Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.861723 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.877893 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.998455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerStarted","Data":"84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792"}
Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.998533 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76b79c9766-s694g"
Feb 21 07:06:05 crc kubenswrapper[4820]: I0221 07:06:05.998555 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76b79c9766-s694g"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.031500 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerStarted","Data":"88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c"}
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.037902 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerStarted","Data":"df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96"}
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.045951 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76b79c9766-s694g" podStartSLOduration=5.045931542 podStartE2EDuration="5.045931542s" podCreationTimestamp="2026-02-21 07:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:06.035889598 +0000 UTC m=+1141.068973796" watchObservedRunningTime="2026-02-21 07:06:06.045931542 +0000 UTC m=+1141.079015740"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.046815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerStarted","Data":"629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e"}
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.071911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerStarted","Data":"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5"}
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.078859 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.078949 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.086122 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-758b5755fc-2m84q" podStartSLOduration=3.790335042 podStartE2EDuration="7.086097129s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.126747288 +0000 UTC m=+1135.159831486" lastFinishedPulling="2026-02-21 07:06:03.422509375 +0000 UTC m=+1138.455593573" observedRunningTime="2026-02-21 07:06:06.058384842 +0000 UTC m=+1141.091469040" watchObservedRunningTime="2026-02-21 07:06:06.086097129 +0000 UTC m=+1141.119181327"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.096307 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" podStartSLOduration=3.9100293 podStartE2EDuration="7.096287487s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.245708197 +0000 UTC m=+1135.278792395" lastFinishedPulling="2026-02-21 07:06:03.431966384 +0000 UTC m=+1138.465050582" observedRunningTime="2026-02-21 07:06:06.087653112 +0000 UTC m=+1141.120737310" watchObservedRunningTime="2026-02-21 07:06:06.096287487 +0000 UTC m=+1141.129371685"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.159248 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" podStartSLOduration=4.807066576 podStartE2EDuration="7.159217966s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.882458185 +0000 UTC m=+1135.915542383" lastFinishedPulling="2026-02-21 07:06:03.234609575 +0000 UTC m=+1138.267693773" observedRunningTime="2026-02-21 07:06:06.118143275 +0000 UTC m=+1141.151227473" watchObservedRunningTime="2026-02-21 07:06:06.159217966 +0000 UTC m=+1141.192302154"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.173853 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"]
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.194143 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-867cbf55-jx754" podStartSLOduration=4.315637676 podStartE2EDuration="7.194123739s" podCreationTimestamp="2026-02-21 07:05:59 +0000 UTC" firstStartedPulling="2026-02-21 07:06:00.722204188 +0000 UTC m=+1135.755288386" lastFinishedPulling="2026-02-21 07:06:03.600690251 +0000 UTC m=+1138.633774449" observedRunningTime="2026-02-21 07:06:06.182549433 +0000 UTC m=+1141.215633651" watchObservedRunningTime="2026-02-21 07:06:06.194123739 +0000 UTC m=+1141.227207937"
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.223479 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"]
Feb 21 07:06:06 crc kubenswrapper[4820]: I0221 07:06:06.429838 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 21 07:06:07 crc kubenswrapper[4820]: I0221 07:06:07.839491 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.094567 4820 generic.go:334] "Generic (PLEG): container finished" podID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerID="e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229" exitCode=0
Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095033 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" containerID="cri-o://669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb" gracePeriod=30
Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.094704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerDied","Data":"e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229"}
Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095170 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" containerID="cri-o://629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e" gracePeriod=30
Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095396 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-758b5755fc-2m84q" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" containerID="cri-o://a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e" gracePeriod=30
Feb 21 07:06:08 crc kubenswrapper[4820]: I0221 07:06:08.095460 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-758b5755fc-2m84q" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" containerID="cri-o://88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c" gracePeriod=30
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115114 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerID="88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c" exitCode=0
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115174 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerID="a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e" exitCode=143
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerDied","Data":"88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c"}
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.115292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerDied","Data":"a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e"}
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117180 4820 generic.go:334] "Generic (PLEG): container finished" podID="4aea1771-69e2-4735-b813-9a8214a2227c" containerID="629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e" exitCode=0
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117269 4820 generic.go:334] "Generic (PLEG): container finished" podID="4aea1771-69e2-4735-b813-9a8214a2227c" containerID="669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb" exitCode=143
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117422 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerDied","Data":"629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e"}
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.117529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerDied","Data":"669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb"}
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.556639 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8678d9479b-vqsct"
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.683108 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.749043 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"]
Feb 21 07:06:09 crc kubenswrapper[4820]: I0221 07:06:09.749333 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" containerID="cri-o://fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173" gracePeriod=10
Feb 21 07:06:10 crc kubenswrapper[4820]: I0221 07:06:10.133629 4820 generic.go:334] "Generic (PLEG): container finished" podID="375bfff4-76af-4f71-a665-c409feeb6f67" containerID="fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173" exitCode=0
Feb 21 07:06:10 crc kubenswrapper[4820]: I0221 07:06:10.133678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerDied","Data":"fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173"}
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.111364 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.337421 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wdvf7"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464442 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464573 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.464914 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") pod \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\" (UID: \"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.465677 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs" (OuterVolumeSpecName: "logs") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.469004 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d" (OuterVolumeSpecName: "kube-api-access-tcn9d") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "kube-api-access-tcn9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.471391 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts" (OuterVolumeSpecName: "scripts") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.502040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data" (OuterVolumeSpecName: "config-data") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.503511 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" (UID: "e2b995bf-93f1-4f28-a1a6-0d13ac9ca744"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.534228 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567838 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567865 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567875 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567903 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.567915 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcn9d\" (UniqueName: \"kubernetes.io/projected/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744-kube-api-access-tcn9d\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: E0221 07:06:13.663060 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669086 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669196 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669439 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.669523 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") pod \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\" (UID: \"f42a19be-1d8d-45f5-a92e-95b3fc416db7\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.677634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg" (OuterVolumeSpecName: "kube-api-access-glrwg") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "kube-api-access-glrwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.677721 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs" (OuterVolumeSpecName: "logs") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.689654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.721736 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.733137 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.736194 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.763689 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data" (OuterVolumeSpecName: "config-data") pod "f42a19be-1d8d-45f5-a92e-95b3fc416db7" (UID: "f42a19be-1d8d-45f5-a92e-95b3fc416db7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773027 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773220 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.773710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") pod \"4aea1771-69e2-4735-b813-9a8214a2227c\" (UID: \"4aea1771-69e2-4735-b813-9a8214a2227c\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776536 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776766 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776791 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrwg\" (UniqueName: \"kubernetes.io/projected/f42a19be-1d8d-45f5-a92e-95b3fc416db7-kube-api-access-glrwg\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.776805 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42a19be-1d8d-45f5-a92e-95b3fc416db7-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.777059 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42a19be-1d8d-45f5-a92e-95b3fc416db7-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.781389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs" (OuterVolumeSpecName: "logs") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.783399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv" (OuterVolumeSpecName: "kube-api-access-jbrbv") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "kube-api-access-jbrbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.783497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.799785 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.826411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data" (OuterVolumeSpecName: "config-data") pod "4aea1771-69e2-4735-b813-9a8214a2227c" (UID: "4aea1771-69e2-4735-b813-9a8214a2227c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.843811 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76b79c9766-s694g"
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.878864 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879051 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") pod \"375bfff4-76af-4f71-a665-c409feeb6f67\" (UID: \"375bfff4-76af-4f71-a665-c409feeb6f67\") "
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879807 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4aea1771-69e2-4735-b813-9a8214a2227c-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879829 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrbv\" (UniqueName: \"kubernetes.io/projected/4aea1771-69e2-4735-b813-9a8214a2227c-kube-api-access-jbrbv\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879841 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879850 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.879860 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aea1771-69e2-4735-b813-9a8214a2227c-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.883136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs" (OuterVolumeSpecName: "kube-api-access-mbqjs") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "kube-api-access-mbqjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.928456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.928966 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.933624 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config" (OuterVolumeSpecName: "config") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.936960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.948909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "375bfff4-76af-4f71-a665-c409feeb6f67" (UID: "375bfff4-76af-4f71-a665-c409feeb6f67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982375 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982418 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982431 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982444 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982456 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbqjs\" (UniqueName: \"kubernetes.io/projected/375bfff4-76af-4f71-a665-c409feeb6f67-kube-api-access-mbqjs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.982467 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/375bfff4-76af-4f71-a665-c409feeb6f67-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:13 crc kubenswrapper[4820]: I0221 07:06:13.986215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.045257 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.045524 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8678d9479b-vqsct" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" containerID="cri-o://9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.045912 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8678d9479b-vqsct" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" containerID="cri-o://eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.207313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" event={"ID":"4aea1771-69e2-4735-b813-9a8214a2227c","Type":"ContainerDied","Data":"e37ea0169f5b1d136331cf197b065bd27c292073bd4e6a5a36c7265891cbd6b0"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.207361 4820 scope.go:117] "RemoveContainer" containerID="629f3b6e3f1bcb0984983a5587dbd2bcaca3ff8f40db0958e208860ec30ff25e" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.207489 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-559489d5f8-ngqx9" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.215517 4820 generic.go:334] "Generic (PLEG): container finished" podID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerID="9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec" exitCode=143 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.215589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerDied","Data":"9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.218492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wdvf7" event={"ID":"e2b995bf-93f1-4f28-a1a6-0d13ac9ca744","Type":"ContainerDied","Data":"cc4fb96b39e1936b86af57f1db3fb5919410cadbcdd356f24dc36d2766a16bc7"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.218523 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4fb96b39e1936b86af57f1db3fb5919410cadbcdd356f24dc36d2766a16bc7" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.218763 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wdvf7" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.233803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" event={"ID":"375bfff4-76af-4f71-a665-c409feeb6f67","Type":"ContainerDied","Data":"0deee66ca0c914e04051643e2ef7f61bf67d60020463554eb611d4a4dbdb4fc8"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.233899 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68bc8f6695-dvmcz" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.244644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-758b5755fc-2m84q" event={"ID":"f42a19be-1d8d-45f5-a92e-95b3fc416db7","Type":"ContainerDied","Data":"fb422d822e894d47a3283c85ceaf5b546e6dcaf88367608fcf5a454edd87769f"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.245193 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-758b5755fc-2m84q" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.257885 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" containerID="cri-o://9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerStarted","Data":"cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7"} Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258759 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="proxy-httpd" containerID="cri-o://cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.258865 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" 
containerID="cri-o://a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5" gracePeriod=30 Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.325510 4820 scope.go:117] "RemoveContainer" containerID="669ae930f380ee63a72d09d4f6912014d9f3d432369709db082d870a245ab5bb" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.350596 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.353595 4820 scope.go:117] "RemoveContainer" containerID="fde577041ba66e346a36a4b20611073001e0ace822b909662a854ab13a1c8173" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.354097 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68bc8f6695-dvmcz"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.374044 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.384369 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-559489d5f8-ngqx9"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.418052 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.430924 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-758b5755fc-2m84q"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.432599 4820 scope.go:117] "RemoveContainer" containerID="18dc85665c905eaff86848c97e8cbe825cac87dcc411dd90b770e67c8c997f65" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.474563 4820 scope.go:117] "RemoveContainer" containerID="88c3991a2e310c1f9f3f33dda774b93f30bbb6da073076be213f21212b50f54c" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507509 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] 
Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507852 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507868 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507882 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507888 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507896 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="init" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507903 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="init" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507915 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507921 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507937 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerName="placement-db-sync" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507943 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerName="placement-db-sync" 
Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507955 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507961 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" Feb 21 07:06:14 crc kubenswrapper[4820]: E0221 07:06:14.507971 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.507977 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508133 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508301 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" containerName="dnsmasq-dns" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508332 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" containerName="barbican-keystone-listener" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508350 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker-log" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508358 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" containerName="barbican-worker" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.508370 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" containerName="placement-db-sync" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.509299 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.511073 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.511404 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.512489 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.512907 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.514822 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p47r7" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.527280 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.577124 4820 scope.go:117] "RemoveContainer" containerID="a26ae2476946948a1aed61bb4a1df1c583b774309171588d19e62631081c841e" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595422 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.595856 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697840 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697927 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.697963 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698009 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.698683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703880 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.703953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.725753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"placement-85cb846b98-bwgbn\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:14 crc kubenswrapper[4820]: I0221 07:06:14.861868 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.265897 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerStarted","Data":"902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea"} Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270592 4820 generic.go:334] "Generic (PLEG): container finished" podID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerID="cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7" exitCode=0 Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270617 4820 generic.go:334] "Generic (PLEG): container finished" podID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerID="a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5" exitCode=2 Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7"} Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.270669 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5"} Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.284424 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vfn4b" podStartSLOduration=3.446484431 podStartE2EDuration="48.284407318s" podCreationTimestamp="2026-02-21 07:05:27 +0000 UTC" firstStartedPulling="2026-02-21 07:05:28.517916497 +0000 UTC m=+1103.551000695" lastFinishedPulling="2026-02-21 07:06:13.355839384 +0000 UTC m=+1148.388923582" observedRunningTime="2026-02-21 07:06:15.280740378 +0000 UTC 
m=+1150.313824576" watchObservedRunningTime="2026-02-21 07:06:15.284407318 +0000 UTC m=+1150.317491516" Feb 21 07:06:15 crc kubenswrapper[4820]: W0221 07:06:15.357997 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7e0258_f8e3_4e7c_8a4d_aec3ee4d2ffe.slice/crio-d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08 WatchSource:0}: Error finding container d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08: Status 404 returned error can't find the container with id d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08 Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.361302 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.731410 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375bfff4-76af-4f71-a665-c409feeb6f67" path="/var/lib/kubelet/pods/375bfff4-76af-4f71-a665-c409feeb6f67/volumes" Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.733076 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aea1771-69e2-4735-b813-9a8214a2227c" path="/var/lib/kubelet/pods/4aea1771-69e2-4735-b813-9a8214a2227c/volumes" Feb 21 07:06:15 crc kubenswrapper[4820]: I0221 07:06:15.733904 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42a19be-1d8d-45f5-a92e-95b3fc416db7" path="/var/lib/kubelet/pods/f42a19be-1d8d-45f5-a92e-95b3fc416db7/volumes" Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerStarted","Data":"2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b"} Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerStarted","Data":"eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443"} Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerStarted","Data":"d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08"} Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280729 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.280751 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:06:16 crc kubenswrapper[4820]: I0221 07:06:16.308109 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85cb846b98-bwgbn" podStartSLOduration=2.308085301 podStartE2EDuration="2.308085301s" podCreationTimestamp="2026-02-21 07:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:16.304113553 +0000 UTC m=+1151.337197751" watchObservedRunningTime="2026-02-21 07:06:16.308085301 +0000 UTC m=+1151.341169499" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.224679 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8678d9479b-vqsct" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:41894->10.217.0.162:9311: read: connection reset by peer" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.224722 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8678d9479b-vqsct" 
podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:41896->10.217.0.162:9311: read: connection reset by peer" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.290660 4820 generic.go:334] "Generic (PLEG): container finished" podID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerID="eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d" exitCode=0 Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.291683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerDied","Data":"eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d"} Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.657381 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756190 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") pod 
\"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.756493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") pod \"1d2a71d7-f0a3-47e2-9594-303d2240043a\" (UID: \"1d2a71d7-f0a3-47e2-9594-303d2240043a\") " Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.762788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs" (OuterVolumeSpecName: "logs") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.778114 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf" (OuterVolumeSpecName: "kube-api-access-6bncf") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "kube-api-access-6bncf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.809557 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.813422 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858488 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858737 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bncf\" (UniqueName: \"kubernetes.io/projected/1d2a71d7-f0a3-47e2-9594-303d2240043a-kube-api-access-6bncf\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858751 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2a71d7-f0a3-47e2-9594-303d2240043a-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.858761 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.859168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data" (OuterVolumeSpecName: "config-data") pod "1d2a71d7-f0a3-47e2-9594-303d2240043a" (UID: "1d2a71d7-f0a3-47e2-9594-303d2240043a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:17 crc kubenswrapper[4820]: I0221 07:06:17.960940 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2a71d7-f0a3-47e2-9594-303d2240043a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.299292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8678d9479b-vqsct" event={"ID":"1d2a71d7-f0a3-47e2-9594-303d2240043a","Type":"ContainerDied","Data":"5bed7530faf105f1f1bc8124a0e0b6da645917e74dd6cbd033eab92c51acc5f7"} Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.300533 4820 scope.go:117] "RemoveContainer" containerID="eefd2f11abd4007503e1948f668682aa77a956178ce8940a23b4f0cd82e4017d" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.300746 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8678d9479b-vqsct" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.307793 4820 generic.go:334] "Generic (PLEG): container finished" podID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerID="9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805" exitCode=0 Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.307836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805"} Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.344228 4820 scope.go:117] "RemoveContainer" containerID="9aecb2af3009bc608fea166750c2b4fb589074d7853e9934bcc3f142b21868ec" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.357754 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.370835 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8678d9479b-vqsct"] Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.650588 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772529 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772671 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772772 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.772826 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") pod \"a3cce54d-5f2a-4e51-864d-03e55b50d698\" (UID: \"a3cce54d-5f2a-4e51-864d-03e55b50d698\") " Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.774020 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.781503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w" (OuterVolumeSpecName: "kube-api-access-9vr9w") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "kube-api-access-9vr9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.781994 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.796520 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts" (OuterVolumeSpecName: "scripts") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.842714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.874943 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.874982 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.874994 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3cce54d-5f2a-4e51-864d-03e55b50d698-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.875005 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: 
I0221 07:06:18.875020 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vr9w\" (UniqueName: \"kubernetes.io/projected/a3cce54d-5f2a-4e51-864d-03e55b50d698-kube-api-access-9vr9w\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.883382 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data" (OuterVolumeSpecName: "config-data") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.901439 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3cce54d-5f2a-4e51-864d-03e55b50d698" (UID: "a3cce54d-5f2a-4e51-864d-03e55b50d698"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.976772 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:18 crc kubenswrapper[4820]: I0221 07:06:18.976805 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3cce54d-5f2a-4e51-864d-03e55b50d698-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.319790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3cce54d-5f2a-4e51-864d-03e55b50d698","Type":"ContainerDied","Data":"0f8176927ad01d0eb54f7e8ca55f1bbe340ac767367622047b311589a963df40"} Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.320121 4820 scope.go:117] "RemoveContainer" containerID="cb629ad0bbb7b9acd7f005e921ba221a260f2358550412aca3a0d13dac46f4b7" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.320264 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.339146 4820 scope.go:117] "RemoveContainer" containerID="a3ae24ef827f682ba1110ccd6e6f98b2ddca11c1d6ed5c47dbaeb182af499ae5" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.366959 4820 scope.go:117] "RemoveContainer" containerID="9bdbaee398862dd842b50a8bb04bbb6638f1a2775f7a43c604f1191103118805" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.383294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.396594 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.405774 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406208 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406264 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406273 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406288 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="proxy-httpd" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406293 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" 
containerName="proxy-httpd" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406308 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" Feb 21 07:06:19 crc kubenswrapper[4820]: E0221 07:06:19.406328 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406334 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406489 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="proxy-httpd" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406503 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="sg-core" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406519 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api-log" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406530 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" containerName="barbican-api" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.406540 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" containerName="ceilometer-notification-agent" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.408465 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.410877 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.412106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.420357 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.485923 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " 
pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.486335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588586 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.588633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.589314 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: 
I0221 07:06:19.589564 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.593063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.593492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.594310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.597541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.609002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"ceilometer-0\" (UID: 
\"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " pod="openstack/ceilometer-0" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.719517 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2a71d7-f0a3-47e2-9594-303d2240043a" path="/var/lib/kubelet/pods/1d2a71d7-f0a3-47e2-9594-303d2240043a/volumes" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.720112 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3cce54d-5f2a-4e51-864d-03e55b50d698" path="/var/lib/kubelet/pods/a3cce54d-5f2a-4e51-864d-03e55b50d698/volumes" Feb 21 07:06:19 crc kubenswrapper[4820]: I0221 07:06:19.728372 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:20 crc kubenswrapper[4820]: I0221 07:06:20.184459 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:20 crc kubenswrapper[4820]: I0221 07:06:20.330026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"391e3a8821e7a5d4d540410a63bf4ea889c64567ec635528d2b32100b2356ede"} Feb 21 07:06:21 crc kubenswrapper[4820]: I0221 07:06:21.377537 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15"} Feb 21 07:06:21 crc kubenswrapper[4820]: I0221 07:06:21.382104 4820 generic.go:334] "Generic (PLEG): container finished" podID="b400c916-2ba9-4d7e-b9f5-6044605f279c" containerID="902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea" exitCode=0 Feb 21 07:06:21 crc kubenswrapper[4820]: I0221 07:06:21.382147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" 
event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerDied","Data":"902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea"} Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.391939 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a"} Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.392260 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb"} Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.814879 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.953520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.953665 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.953730 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc 
kubenswrapper[4820]: I0221 07:06:22.954262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.954335 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.954359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") pod \"b400c916-2ba9-4d7e-b9f5-6044605f279c\" (UID: \"b400c916-2ba9-4d7e-b9f5-6044605f279c\") " Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.955396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.960361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g" (OuterVolumeSpecName: "kube-api-access-d2l4g") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "kube-api-access-d2l4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.969306 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts" (OuterVolumeSpecName: "scripts") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:22 crc kubenswrapper[4820]: I0221 07:06:22.969366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.014230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.021380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data" (OuterVolumeSpecName: "config-data") pod "b400c916-2ba9-4d7e-b9f5-6044605f279c" (UID: "b400c916-2ba9-4d7e-b9f5-6044605f279c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057262 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057303 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057316 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2l4g\" (UniqueName: \"kubernetes.io/projected/b400c916-2ba9-4d7e-b9f5-6044605f279c-kube-api-access-d2l4g\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057330 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b400c916-2ba9-4d7e-b9f5-6044605f279c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057341 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.057351 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b400c916-2ba9-4d7e-b9f5-6044605f279c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.401390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerStarted","Data":"0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f"} Feb 21 07:06:23 crc 
kubenswrapper[4820]: I0221 07:06:23.401543 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.403399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vfn4b" event={"ID":"b400c916-2ba9-4d7e-b9f5-6044605f279c","Type":"ContainerDied","Data":"8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24"} Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.403430 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8825622824e2c5a26d801793ab024244254cf79018cd4389ed87a92f9a749c24" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.403487 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vfn4b" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.432212 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.552158224 podStartE2EDuration="4.432194219s" podCreationTimestamp="2026-02-21 07:06:19 +0000 UTC" firstStartedPulling="2026-02-21 07:06:20.187669351 +0000 UTC m=+1155.220753549" lastFinishedPulling="2026-02-21 07:06:23.067705346 +0000 UTC m=+1158.100789544" observedRunningTime="2026-02-21 07:06:23.425543627 +0000 UTC m=+1158.458627845" watchObservedRunningTime="2026-02-21 07:06:23.432194219 +0000 UTC m=+1158.465278417" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.705729 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:23 crc kubenswrapper[4820]: E0221 07:06:23.706338 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" containerName="cinder-db-sync" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.706359 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" 
containerName="cinder-db-sync" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.706603 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" containerName="cinder-db-sync" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.707675 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.710457 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.710902 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mmvl6" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.711178 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.711340 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.713286 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767590 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.767990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.768015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.768038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.768075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.775296 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.789140 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869426 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869516 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869670 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.869741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.870816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.873762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.874850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.875959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.879740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.886091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"cinder-scheduler-0\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974767 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.974938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.975000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.977431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.977483 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.980084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.980333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.980657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.981361 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:23 crc kubenswrapper[4820]: I0221 07:06:23.999668 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.001850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"dnsmasq-dns-6c69c79c7f-n4pc2\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.002203 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.005076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.029570 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"cinder-api-0\" (UID: 
\"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.076982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.077036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.077110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.092124 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178580 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6w49\" (UniqueName: 
\"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.178738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.179226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.181312 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.188827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.190076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.191752 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.192511 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.208149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"cinder-api-0\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") " pod="openstack/cinder-api-0" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.324360 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.392450 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.566286 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.676478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"]
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.701355 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"]
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.702395 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" containerID="cri-o://302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" gracePeriod=30
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.703161 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" containerID="cri-o://a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" gracePeriod=30
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.746682 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7796b97765-sqvtc"]
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.750432 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.763647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"]
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.826651 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": read tcp 10.217.0.2:41850->10.217.0.155:9696: read: connection reset by peer"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900845 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900882 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900928 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:24 crc kubenswrapper[4820]: I0221 07:06:24.900964 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002740 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.002992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.009589 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.010102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.010309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.010485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.012922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.012945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.028672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"neutron-7796b97765-sqvtc\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.036477 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.073510 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.424898 4820 generic.go:334] "Generic (PLEG): container finished" podID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47" exitCode=0
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.424977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerDied","Data":"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"}
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.430219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerStarted","Data":"6e502663719ec0c0a0a84d0c96dd6393160aec2507fa225d1ef3ff9eecb2291e"}
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.432842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerStarted","Data":"ff64435a47c0297e2732c2e77200493a270636c1ffcd894d5737019411ddb58e"}
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.434581 4820 generic.go:334] "Generic (PLEG): container finished" podID="68596d31-1da0-47aa-9330-179af16beee5" containerID="aab33edaeb25dccd647f693bcaba1307465b538dbe3fc05e9d81c6d78bcc4858" exitCode=0
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.434670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerDied","Data":"aab33edaeb25dccd647f693bcaba1307465b538dbe3fc05e9d81c6d78bcc4858"}
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.434716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerStarted","Data":"28ad0df7b26bbd0219980c2f8c1104679c4b4d8454ba1005ca678ce2d979fa35"}
Feb 21 07:06:25 crc kubenswrapper[4820]: I0221 07:06:25.714553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"]
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.135053 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.453345 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerStarted","Data":"f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed"}
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.454379 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2"
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerStarted","Data":"89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903"}
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerStarted","Data":"cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117"}
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerStarted","Data":"11c093e11abcb295098b0a4ebd02622476fcadbf35b1cbecc53f2deb5b20c639"}
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.459919 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7796b97765-sqvtc"
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.461045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerStarted","Data":"deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293"}
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.475888 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" podStartSLOduration=3.475871343 podStartE2EDuration="3.475871343s" podCreationTimestamp="2026-02-21 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:26.46847921 +0000 UTC m=+1161.501563408" watchObservedRunningTime="2026-02-21 07:06:26.475871343 +0000 UTC m=+1161.508955541"
Feb 21 07:06:26 crc kubenswrapper[4820]: I0221 07:06:26.499386 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7796b97765-sqvtc" podStartSLOduration=2.499366085 podStartE2EDuration="2.499366085s" podCreationTimestamp="2026-02-21 07:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:26.487804268 +0000 UTC m=+1161.520888476" watchObservedRunningTime="2026-02-21 07:06:26.499366085 +0000 UTC m=+1161.532450283"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.156941 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254782 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254913 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.254981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") pod \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\" (UID: \"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47\") "
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.272484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.273420 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm" (OuterVolumeSpecName: "kube-api-access-hwzcm") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "kube-api-access-hwzcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.359407 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.359445 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzcm\" (UniqueName: \"kubernetes.io/projected/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-kube-api-access-hwzcm\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.371909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config" (OuterVolumeSpecName: "config") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.384014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.389952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.404139 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.452307 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" (UID: "a319e7a0-81bf-4952-80fe-9c8b4cbd3f47"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461482 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461520 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461529 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461546 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.461555 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474754 4820 generic.go:334] "Generic (PLEG): container finished" podID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0" exitCode=0
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474817 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85dd5db455-fl7mt"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474831 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerDied","Data":"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"}
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dd5db455-fl7mt" event={"ID":"a319e7a0-81bf-4952-80fe-9c8b4cbd3f47","Type":"ContainerDied","Data":"73fe748c020d9cdb0f7411013cf334c00e8fbd8633affe05f3bd15d54091bf15"}
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.474995 4820 scope.go:117] "RemoveContainer" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerStarted","Data":"29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74"}
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476792 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" containerID="cri-o://deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293" gracePeriod=30
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476865 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.476896 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" containerID="cri-o://29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74" gracePeriod=30
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.487088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerStarted","Data":"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"}
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.487125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerStarted","Data":"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"}
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.503925 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.503905665 podStartE2EDuration="4.503905665s" podCreationTimestamp="2026-02-21 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:27.497020077 +0000 UTC m=+1162.530104275" watchObservedRunningTime="2026-02-21 07:06:27.503905665 +0000 UTC m=+1162.536989873"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.527311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.538099333 podStartE2EDuration="4.527292234s" podCreationTimestamp="2026-02-21 07:06:23 +0000 UTC" firstStartedPulling="2026-02-21 07:06:24.608005707 +0000 UTC m=+1159.641089905" lastFinishedPulling="2026-02-21 07:06:25.597198608 +0000 UTC m=+1160.630282806" observedRunningTime="2026-02-21 07:06:27.520277782 +0000 UTC m=+1162.553362000" watchObservedRunningTime="2026-02-21 07:06:27.527292234 +0000 UTC m=+1162.560376432"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.557369 4820 scope.go:117] "RemoveContainer" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.578635 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"]
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.581038 4820 scope.go:117] "RemoveContainer" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"
Feb 21 07:06:27 crc kubenswrapper[4820]: E0221 07:06:27.584912 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47\": container with ID starting with a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47 not found: ID does not exist" containerID="a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.584965 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47"} err="failed to get container status \"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47\": rpc error: code = NotFound desc = could not find container \"a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47\": container with ID starting with a38e23f723da9b17a2909a2bd3ee015d2d276fd94ccf32fea9f0749440e48c47 not found: ID does not exist"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.584999 4820 scope.go:117] "RemoveContainer" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"
Feb 21 07:06:27 crc kubenswrapper[4820]: E0221 07:06:27.585333 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0\": container with ID starting with 302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0 not found: ID does not exist" containerID="302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.585358 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0"} err="failed to get container status \"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0\": rpc error: code = NotFound desc = could not find container \"302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0\": container with ID starting with 302db26d89b7fa04230cd426109d450f4b22597d20d22d09521acb3988d6dcd0 not found: ID does not exist"
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.590371 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85dd5db455-fl7mt"]
Feb 21 07:06:27 crc kubenswrapper[4820]: I0221 07:06:27.711040 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" path="/var/lib/kubelet/pods/a319e7a0-81bf-4952-80fe-9c8b4cbd3f47/volumes"
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504068 4820 generic.go:334] "Generic (PLEG): container finished" podID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerID="29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74" exitCode=0
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504445 4820 generic.go:334] "Generic (PLEG): container finished" podID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerID="deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293" exitCode=143
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerDied","Data":"29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74"}
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.504972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerDied","Data":"deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293"}
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.597573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.632942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633034 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633139 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633140 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633227 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") pod \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\" (UID: \"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea\") "
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.633638 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.634318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs" (OuterVolumeSpecName: "logs") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.642795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.642867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts" (OuterVolumeSpecName: "scripts") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.642855 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49" (OuterVolumeSpecName: "kube-api-access-z6w49") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "kube-api-access-z6w49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.668880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.708389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data" (OuterVolumeSpecName: "config-data") pod "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" (UID: "39235ef6-27fe-4a11-b23a-c22c1cbcd1ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735888 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735930 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735941 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735952 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735963 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6w49\" (UniqueName: \"kubernetes.io/projected/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-kube-api-access-z6w49\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:28 crc kubenswrapper[4820]: I0221 07:06:28.735975 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.031292 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.515758 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0"
event={"ID":"39235ef6-27fe-4a11-b23a-c22c1cbcd1ea","Type":"ContainerDied","Data":"6e502663719ec0c0a0a84d0c96dd6393160aec2507fa225d1ef3ff9eecb2291e"} Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.515785 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.515818 4820 scope.go:117] "RemoveContainer" containerID="29982dd2117675bce3167f6c416f998d7a5bbffccfe65b29c81ed9279185df74" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.547964 4820 scope.go:117] "RemoveContainer" containerID="deed6997de2910aa36eecf00be5828573d3d0c776bceabb91c3a2cff966a2293" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.557620 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.576212 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.592405 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.592886 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.592970 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.593072 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593135 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.593190 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593265 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" Feb 21 07:06:29 crc kubenswrapper[4820]: E0221 07:06:29.593324 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593630 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593700 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api-log" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593759 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.593826 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" containerName="cinder-api" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.594871 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.604779 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.607002 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.607677 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.607920 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656554 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656615 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656633 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.656744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"cinder-api-0\" 
(UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.706905 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39235ef6-27fe-4a11-b23a-c22c1cbcd1ea" path="/var/lib/kubelet/pods/39235ef6-27fe-4a11-b23a-c22c1cbcd1ea/volumes" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758017 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758362 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758636 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.758716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.759061 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.759168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.759908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.765206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.767133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.767806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.775229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.779900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 
07:06:29.782853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.811174 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"cinder-api-0\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") " pod="openstack/cinder-api-0" Feb 21 07:06:29 crc kubenswrapper[4820]: I0221 07:06:29.933748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 07:06:30 crc kubenswrapper[4820]: W0221 07:06:30.420012 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899bd84b_c67f_4a89_9f92_a68094530566.slice/crio-5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886 WatchSource:0}: Error finding container 5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886: Status 404 returned error can't find the container with id 5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886 Feb 21 07:06:30 crc kubenswrapper[4820]: I0221 07:06:30.426382 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:06:30 crc kubenswrapper[4820]: I0221 07:06:30.527787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerStarted","Data":"5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886"} Feb 21 07:06:31 crc kubenswrapper[4820]: I0221 07:06:31.544620 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerStarted","Data":"9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067"} Feb 21 07:06:31 crc kubenswrapper[4820]: I0221 07:06:31.640499 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:06:32 crc kubenswrapper[4820]: I0221 07:06:32.555657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerStarted","Data":"765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c"} Feb 21 07:06:32 crc kubenswrapper[4820]: I0221 07:06:32.557327 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 07:06:32 crc kubenswrapper[4820]: I0221 07:06:32.586638 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.586618158 podStartE2EDuration="3.586618158s" podCreationTimestamp="2026-02-21 07:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:32.572334608 +0000 UTC m=+1167.605418846" watchObservedRunningTime="2026-02-21 07:06:32.586618158 +0000 UTC m=+1167.619702356" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.069083 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.070697 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.082174 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.082267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.082397 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ntd4f" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.091356 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.111484 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.149732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.171549 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.171769 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns" containerID="cri-o://d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5" gracePeriod=10 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251658 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.251787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.252867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.259704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.259972 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.271153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"openstackclient\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") " 
pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.329118 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.376510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.417882 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.583385 4820 generic.go:334] "Generic (PLEG): container finished" podID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerID="d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5" exitCode=0 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.583433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerDied","Data":"d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5"} Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.583566 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler" containerID="cri-o://1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" gracePeriod=30 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.584338 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe" containerID="cri-o://1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" gracePeriod=30 Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.661794 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") "
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") "
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") "
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.771964 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") "
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.772017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") "
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.772164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") pod \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\" (UID: \"29aae534-5c23-4125-a6c1-57b4bd7a2a4c\") "
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.777587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst" (OuterVolumeSpecName: "kube-api-access-chvst") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "kube-api-access-chvst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.831679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.837148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.837968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.862060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.867915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config" (OuterVolumeSpecName: "config") pod "29aae534-5c23-4125-a6c1-57b4bd7a2a4c" (UID: "29aae534-5c23-4125-a6c1-57b4bd7a2a4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875097 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875150 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875165 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chvst\" (UniqueName: \"kubernetes.io/projected/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-kube-api-access-chvst\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875185 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875203 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.875216 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29aae534-5c23-4125-a6c1-57b4bd7a2a4c-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:34 crc kubenswrapper[4820]: W0221 07:06:34.982422 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7d6374d_1595_4586_b161_d199a2b39068.slice/crio-34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb WatchSource:0}: Error finding container 34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb: Status 404 returned error can't find the container with id 34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb
Feb 21 07:06:34 crc kubenswrapper[4820]: I0221 07:06:34.986612 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.596870 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl"
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.596869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7489f6876c-2n9gl" event={"ID":"29aae534-5c23-4125-a6c1-57b4bd7a2a4c","Type":"ContainerDied","Data":"f12c1a8e0db096347f19d2697b9e9331aac42f90a3217a3038a39188f188a441"}
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.598087 4820 scope.go:117] "RemoveContainer" containerID="d21e5362f3bdef1222d983791df13fcb26aee43c220da6058c8541e05112d6b5"
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.598509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d7d6374d-1595-4586-b161-d199a2b39068","Type":"ContainerStarted","Data":"34c194c9fe818035a6ace9db4a70ccfc491278d779d52eb481f0eb40fdc2f9cb"}
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.606790 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d7b1660-2001-4122-9369-97c629938e58" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77" exitCode=0
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.606833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerDied","Data":"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"}
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.621846 4820 scope.go:117] "RemoveContainer" containerID="6e603615eb6f8aebb5fc0a7934eddaf580b840ae971a07039f0c0c6049a9ef38"
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.638162 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"]
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.648556 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7489f6876c-2n9gl"]
Feb 21 07:06:35 crc kubenswrapper[4820]: I0221 07:06:35.716396 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" path="/var/lib/kubelet/pods/29aae534-5c23-4125-a6c1-57b4bd7a2a4c/volumes"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.405736 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:06:37 crc kubenswrapper[4820]: E0221 07:06:37.409617 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.409650 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns"
Feb 21 07:06:37 crc kubenswrapper[4820]: E0221 07:06:37.409674 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="init"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.409684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="init"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.409954 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="29aae534-5c23-4125-a6c1-57b4bd7a2a4c" containerName="dnsmasq-dns"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.411081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.414375 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.414539 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.416922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.425746 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521213 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.521997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.623652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.624046 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.624211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.630439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.631941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.632918 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.637631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.639852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.641122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"swift-proxy-cffb45b79-w6bp8\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") " pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:37 crc kubenswrapper[4820]: I0221 07:06:37.774146 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.338731 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.353851 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499181 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") "
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499267 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") "
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") "
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") "
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499549 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") "
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") pod \"8d7b1660-2001-4122-9369-97c629938e58\" (UID: \"8d7b1660-2001-4122-9369-97c629938e58\") "
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.499958 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d7b1660-2001-4122-9369-97c629938e58-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.504829 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.505464 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4" (OuterVolumeSpecName: "kube-api-access-8f7p4") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "kube-api-access-8f7p4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.511136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts" (OuterVolumeSpecName: "scripts") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.531671 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.531931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" containerID="cri-o://792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15" gracePeriod=30
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.532069 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" containerID="cri-o://0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f" gracePeriod=30
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.532113 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" containerID="cri-o://e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a" gracePeriod=30
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.532147 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" containerID="cri-o://1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb" gracePeriod=30
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.540349 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.602412 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f7p4\" (UniqueName: \"kubernetes.io/projected/8d7b1660-2001-4122-9369-97c629938e58-kube-api-access-8f7p4\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.602440 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.602449 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.689138 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerStarted","Data":"799aa64333911f7111f98ffff76ee1c66aebdf83eeaa6dc6c45e5389c74e915a"}
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690702 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d7b1660-2001-4122-9369-97c629938e58" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef" exitCode=0
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerDied","Data":"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"}
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d7b1660-2001-4122-9369-97c629938e58","Type":"ContainerDied","Data":"ff64435a47c0297e2732c2e77200493a270636c1ffcd894d5737019411ddb58e"}
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690772 4820 scope.go:117] "RemoveContainer" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.690884 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.708356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.745357 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data" (OuterVolumeSpecName: "config-data") pod "8d7b1660-2001-4122-9369-97c629938e58" (UID: "8d7b1660-2001-4122-9369-97c629938e58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.805930 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.805959 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7b1660-2001-4122-9369-97c629938e58-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.829517 4820 scope.go:117] "RemoveContainer" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.862882 4820 scope.go:117] "RemoveContainer" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"
Feb 21 07:06:38 crc kubenswrapper[4820]: E0221 07:06:38.863511 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77\": container with ID starting with 1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77 not found: ID does not exist" containerID="1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.863620 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77"} err="failed to get container status \"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77\": rpc error: code = NotFound desc = could not find container \"1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77\": container with ID starting with 1974c01221acdbf7cadd7f8ec44eac19fcbabbd9fa2113c2558b79edfceb1f77 not found: ID does not exist"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.863656 4820 scope.go:117] "RemoveContainer" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"
Feb 21 07:06:38 crc kubenswrapper[4820]: E0221 07:06:38.864251 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef\": container with ID starting with 1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef not found: ID does not exist" containerID="1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"
Feb 21 07:06:38 crc kubenswrapper[4820]: I0221 07:06:38.864293 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef"} err="failed to get container status \"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef\": rpc error: code = NotFound desc = could not find container \"1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef\": container with ID starting with 1a2cc91053bb23b9532f8d68dea387f7c9cbccfe017244cc91d6d168e5cbbeef not found: ID does not exist"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.031461 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.036900 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058161 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:06:39 crc kubenswrapper[4820]: E0221 07:06:39.058591 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058608 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe"
Feb 21 07:06:39 crc kubenswrapper[4820]: E0221 07:06:39.058620 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058630 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058798 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="cinder-scheduler"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.058815 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7b1660-2001-4122-9369-97c629938e58" containerName="probe"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.059753 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.061873 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.083674 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.110952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111216 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.111371 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0"
Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"cinder-scheduler-0\"
(UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213267 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213469 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.213509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 
07:06:39.213515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.218411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.218417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.220843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.225825 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.233418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkbbn\" (UniqueName: 
\"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"cinder-scheduler-0\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.392888 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.711139 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7b1660-2001-4122-9369-97c629938e58" path="/var/lib/kubelet/pods/8d7b1660-2001-4122-9369-97c629938e58/volumes" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720328 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerStarted","Data":"974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerStarted","Data":"a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720514 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.720608 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737834 4820 generic.go:334] "Generic (PLEG): container finished" podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f" exitCode=0 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737879 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a" exitCode=2 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737887 4820 generic.go:334] "Generic (PLEG): container finished" podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb" exitCode=0 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737894 4820 generic.go:334] "Generic (PLEG): container finished" podID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerID="792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15" exitCode=0 Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737966 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.737975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15"} Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.756616 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-proxy-cffb45b79-w6bp8" podStartSLOduration=2.756590799 podStartE2EDuration="2.756590799s" podCreationTimestamp="2026-02-21 07:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:39.739592954 +0000 UTC m=+1174.772677162" watchObservedRunningTime="2026-02-21 07:06:39.756590799 +0000 UTC m=+1174.789674997" Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.853548 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:06:39 crc kubenswrapper[4820]: W0221 07:06:39.863960 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode533e163_2ccc_4468_9083_c9bf711b0dfb.slice/crio-26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a WatchSource:0}: Error finding container 26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a: Status 404 returned error can't find the container with id 26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a Feb 21 07:06:39 crc kubenswrapper[4820]: I0221 07:06:39.961606 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032157 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032368 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032534 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.032581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") pod \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\" (UID: \"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c\") " Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.033549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.034711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.038214 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts" (OuterVolumeSpecName: "scripts") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.068541 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7" (OuterVolumeSpecName: "kube-api-access-wfpp7") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "kube-api-access-wfpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.117393 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138107 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfpp7\" (UniqueName: \"kubernetes.io/projected/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-kube-api-access-wfpp7\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138147 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138159 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138169 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.138180 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.182570 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.212416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data" (OuterVolumeSpecName: "config-data") pod "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" (UID: "695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.245618 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.245661 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.747636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerStarted","Data":"26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a"} Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.761516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c","Type":"ContainerDied","Data":"391e3a8821e7a5d4d540410a63bf4ea889c64567ec635528d2b32100b2356ede"} Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.761607 4820 scope.go:117] "RemoveContainer" containerID="0f4fef91c2862a4646b8bf634066a73a2e52c555f67d14352b7e17152204700f" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.761549 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.782376 4820 scope.go:117] "RemoveContainer" containerID="e50fb2c2d5d2058a45ddf6cac5b63dce70dcdc05810f14b1050c0f42254a6e6a" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.810968 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.820314 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.831524 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832110 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832143 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832157 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832164 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832185 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832192 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" Feb 21 07:06:40 crc kubenswrapper[4820]: E0221 07:06:40.832213 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832219 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832397 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-central-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832410 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="proxy-httpd" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832423 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="ceilometer-notification-agent" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.832434 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" containerName="sg-core" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.834139 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.837837 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.838790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.838790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.959739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.959953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"ceilometer-0\" (UID: 
\"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960451 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:40 crc kubenswrapper[4820]: I0221 07:06:40.960499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.062857 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063172 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.063528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0" Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.064668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " 
pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.087440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.087945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.088506 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.088748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.090398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.095180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"ceilometer-0\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.155357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.710442 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c" path="/var/lib/kubelet/pods/695f1ef3-c220-4002-ae7a-ebd6c5c1bc0c/volumes"
Feb 21 07:06:41 crc kubenswrapper[4820]: I0221 07:06:41.773741 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerStarted","Data":"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677"}
Feb 21 07:06:42 crc kubenswrapper[4820]: I0221 07:06:42.575375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.395880 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85cb846b98-bwgbn"
Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.404038 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85cb846b98-bwgbn"
Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.533941 4820 scope.go:117] "RemoveContainer" containerID="1bf9a5312dc663d5ff01578445253ee3d622d5c37d73b234651e285d5db084fb"
Feb 21 07:06:46 crc kubenswrapper[4820]: I0221 07:06:46.612903 4820 scope.go:117] "RemoveContainer" containerID="792041cdd1d49253730bea81a9e8b7c6b65cdd0b5a588d9dbbddee7a05d92e15"
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.130819 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.783671 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.785691 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.852026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d7d6374d-1595-4586-b161-d199a2b39068","Type":"ContainerStarted","Data":"909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342"}
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.855358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerStarted","Data":"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"}
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.857481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"c6e4c61f560fdc36ef8818a932ad9b4e68979f45ec64327ab6006d30f510ba75"}
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.892350 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.220524709 podStartE2EDuration="13.892334051s" podCreationTimestamp="2026-02-21 07:06:34 +0000 UTC" firstStartedPulling="2026-02-21 07:06:34.985518815 +0000 UTC m=+1170.018603013" lastFinishedPulling="2026-02-21 07:06:46.657328157 +0000 UTC m=+1181.690412355" observedRunningTime="2026-02-21 07:06:47.880468297 +0000 UTC m=+1182.913552495" watchObservedRunningTime="2026-02-21 07:06:47.892334051 +0000 UTC m=+1182.925418249"
Feb 21 07:06:47 crc kubenswrapper[4820]: I0221 07:06:47.915015 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.91499735 podStartE2EDuration="8.91499735s" podCreationTimestamp="2026-02-21 07:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:47.909719995 +0000 UTC m=+1182.942804193" watchObservedRunningTime="2026-02-21 07:06:47.91499735 +0000 UTC m=+1182.948081548"
Feb 21 07:06:48 crc kubenswrapper[4820]: I0221 07:06:48.859478 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:48 crc kubenswrapper[4820]: I0221 07:06:48.868555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"}
Feb 21 07:06:48 crc kubenswrapper[4820]: I0221 07:06:48.868597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"}
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.393279 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.538882 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.539167 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" containerID="cri-o://3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48" gracePeriod=30
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.539283 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" containerID="cri-o://94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e" gracePeriod=30
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.672949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.879070 4820 generic.go:334] "Generic (PLEG): container finished" podID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerID="3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48" exitCode=143
Feb 21 07:06:49 crc kubenswrapper[4820]: I0221 07:06:49.880484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerDied","Data":"3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48"}
Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.481222 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.481470 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" containerID="cri-o://f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4" gracePeriod=30
Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.481543 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" containerID="cri-o://f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464" gracePeriod=30
Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.888331 4820 generic.go:334] "Generic (PLEG): container finished" podID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerID="f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4" exitCode=143
Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.888446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerDied","Data":"f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4"}
Feb 21 07:06:50 crc kubenswrapper[4820]: I0221 07:06:50.891589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"}
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.672402 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:45738->10.217.0.151:9292: read: connection reset by peer"
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.672931 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:45722->10.217.0.151:9292: read: connection reset by peer"
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.916792 4820 generic.go:334] "Generic (PLEG): container finished" podID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerID="94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e" exitCode=0
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.916861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerDied","Data":"94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e"}
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.919811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerStarted","Data":"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"}
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.919975 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent" containerID="cri-o://0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" gracePeriod=30
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd" containerID="cri-o://4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" gracePeriod=30
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920131 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core" containerID="cri-o://d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" gracePeriod=30
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920175 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent" containerID="cri-o://3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" gracePeriod=30
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.920289 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 21 07:06:52 crc kubenswrapper[4820]: I0221 07:06:52.942555 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.298944974 podStartE2EDuration="12.942534217s" podCreationTimestamp="2026-02-21 07:06:40 +0000 UTC" firstStartedPulling="2026-02-21 07:06:47.141442016 +0000 UTC m=+1182.174526214" lastFinishedPulling="2026-02-21 07:06:51.785031259 +0000 UTC m=+1186.818115457" observedRunningTime="2026-02-21 07:06:52.940307166 +0000 UTC m=+1187.973391364" watchObservedRunningTime="2026-02-21 07:06:52.942534217 +0000 UTC m=+1187.975618415"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.273126 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pjnhh"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.274434 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.288134 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pjnhh"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.348267 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-b68n2"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.353224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.377131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.449260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.449366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.453265 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.454604 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.466148 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.474325 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.492034 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.493964 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.508917 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.546440 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550584 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.550776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.551810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.583179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"nova-api-db-create-pjnhh\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") " pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.602410 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.652948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653181 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653351 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653433 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") pod \"5c400cc2-a2a1-4204-8300-2b2420ab825e\" (UID: \"5c400cc2-a2a1-4204-8300-2b2420ab825e\") "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.653872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.654078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.654478 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs" (OuterVolumeSpecName: "logs") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.655273 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.655696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.660019 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.660978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts" (OuterVolumeSpecName: "scripts") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.664294 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z" (OuterVolumeSpecName: "kube-api-access-6kv8z") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "kube-api-access-6kv8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.668676 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"]
Feb 21 07:06:53 crc kubenswrapper[4820]: E0221 07:06:53.682342 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682371 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log"
Feb 21 07:06:53 crc kubenswrapper[4820]: E0221 07:06:53.682388 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682555 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682747 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-httpd"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.682755 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" containerName="glance-log"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.683203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.683291 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.684984 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.688802 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:60308->10.217.0.152:9292: read: connection reset by peer"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.688814 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:60316->10.217.0.152:9292: read: connection reset by peer"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.692111 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"nova-cell0-db-create-b68n2\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") " pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.703062 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.757835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759741 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759752 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c400cc2-a2a1-4204-8300-2b2420ab825e-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759760 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759769 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759780 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/5c400cc2-a2a1-4204-8300-2b2420ab825e-kube-api-access-6kv8z\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.759798 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.760148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.761098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.765388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data" (OuterVolumeSpecName: "config-data") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.769388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c400cc2-a2a1-4204-8300-2b2420ab825e" (UID: "5c400cc2-a2a1-4204-8300-2b2420ab825e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.781620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"nova-cell1-db-create-vdzvw\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") " pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.782629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"nova-api-a80b-account-create-update-n9j8x\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") " pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.801917 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.827829 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.856142 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863298 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863418 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863436 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.863446 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c400cc2-a2a1-4204-8300-2b2420ab825e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.916777 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.931767 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.938726 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.963123 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"]
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.964812 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.964883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2"
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.967713 4820 generic.go:334] "Generic (PLEG): container finished" podID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerID="f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464" exitCode=0
Feb 21 07:06:53 crc kubenswrapper[4820]: I0221
07:06:53.967851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerDied","Data":"f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.969755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.975115 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979752 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" exitCode=0 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979832 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" exitCode=2 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979844 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0" exitCode=0 Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979941 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.979954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.983607 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"nova-cell0-6ecb-account-create-update-q98t2\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") " pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.984668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c400cc2-a2a1-4204-8300-2b2420ab825e","Type":"ContainerDied","Data":"55817b22512b4f79b05a91fa0314cc7452c7e5542175c8a9531d82ddc3a3f526"} Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.984754 4820 scope.go:117] "RemoveContainer" containerID="94f8cea32bfbe2dcb3dc478f2ac9ab5b9c23f557b5defcc5e3d635872a87fe5e" Feb 21 07:06:53 crc kubenswrapper[4820]: I0221 07:06:53.984782 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.004804 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.027749 4820 scope.go:117] "RemoveContainer" containerID="3e1ff2dd763154f63b65dd4be9fe5f5bcd513f4150395e54156c56ea74a4fb48" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.048476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.063700 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.068354 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.068493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.080510 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.082000 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.085556 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.086309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.086823 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.117676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.169961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.170071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.171075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " 
pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.220099 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"nova-cell1-96c7-account-create-update-fhgrk\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") " pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271503 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271528 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271694 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.271732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.277002 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.371756 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375564 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.375706 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.377253 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.377772 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.378386 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.387308 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.388260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.394205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.415441 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"glance-default-external-api-0\" (UID: 
\"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.430683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.434860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") " pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.478703 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.478997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479090 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.479673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") pod \"1f5a553e-c548-455a-83e2-87f8f71f3067\" (UID: \"1f5a553e-c548-455a-83e2-87f8f71f3067\") " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.481126 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.481344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs" (OuterVolumeSpecName: "logs") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.485416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts" (OuterVolumeSpecName: "scripts") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.487398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h" (OuterVolumeSpecName: "kube-api-access-zmd5h") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "kube-api-access-zmd5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.489455 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.529975 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.547881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data" (OuterVolumeSpecName: "config-data") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.554077 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f5a553e-c548-455a-83e2-87f8f71f3067" (UID: "1f5a553e-c548-455a-83e2-87f8f71f3067"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.582848 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583248 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmd5h\" (UniqueName: \"kubernetes.io/projected/1f5a553e-c548-455a-83e2-87f8f71f3067-kube-api-access-zmd5h\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583408 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583622 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f5a553e-c548-455a-83e2-87f8f71f3067-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583701 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583872 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.583987 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f5a553e-c548-455a-83e2-87f8f71f3067-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.584084 4820 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.614326 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.641076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.668971 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.692458 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.750967 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.793711 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:06:54 crc kubenswrapper[4820]: I0221 07:06:54.894414 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.018567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f5a553e-c548-455a-83e2-87f8f71f3067","Type":"ContainerDied","Data":"52abf5a2098d07a4a0de7b8077842d862d702555a35f0737cfb50e48aa1ad9fd"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.018628 4820 scope.go:117] "RemoveContainer" containerID="f07c17454301badcf8ab4771e95e8220dd709e96e43e5e64fa93a0170de14464" Feb 21 
07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.018754 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.026878 4820 generic.go:334] "Generic (PLEG): container finished" podID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerID="54118e9818d7276160841e63d567ac3e54c21ac7cf2b86b070a7bea2245976ec" exitCode=0 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.026999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pjnhh" event={"ID":"1fa19e90-7854-4eb9-9b72-26c8d0739851","Type":"ContainerDied","Data":"54118e9818d7276160841e63d567ac3e54c21ac7cf2b86b070a7bea2245976ec"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.027032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pjnhh" event={"ID":"1fa19e90-7854-4eb9-9b72-26c8d0739851","Type":"ContainerStarted","Data":"8fcf3b62b09cc7c5fb997c8802705a5d6f14b9b14b1e93d39ca843241e67ca24"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.032334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vdzvw" event={"ID":"324a15c6-a903-420b-8db4-4268008c83d1","Type":"ContainerStarted","Data":"7ae594b8acd25b250e0b397c453bfccd82d4cdfe17cc49f7535da3a8a40fcc1f"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.036780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-n9j8x" event={"ID":"e27134bb-c9b2-42d4-bad5-81e7b05874e7","Type":"ContainerStarted","Data":"c62d1b598ca12ca3ef447b230a957a6ca222b2abcd68ccdf032833cfe33c6549"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.046680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b68n2" 
event={"ID":"e610e477-7d95-4af5-be48-f8a9acd81d6a","Type":"ContainerStarted","Data":"3387d4191f1769cf4932444349d1da8e3c1840dbe23238ce666e4230b0ce3e70"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.052268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" event={"ID":"bbe51cee-e461-4a5f-86d9-0eb600da3a82","Type":"ContainerStarted","Data":"be924def0b2a9b3a4222f8343b7d95f0374d522834a949670480b5db5a155cad"} Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.054630 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.100259 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.175591 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.175995 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7777947948-b8bjv" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-api" containerID="cri-o://336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b" gracePeriod=30 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.176069 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7777947948-b8bjv" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" containerID="cri-o://47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449" gracePeriod=30 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.225548 4820 scope.go:117] "RemoveContainer" containerID="f1065ea92f9064f45c2733a25acd9f61b2299b2724994ced2d00c91a6cdebca4" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.240903 4820 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.282294 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.307285 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: E0221 07:06:55.307992 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308005 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" Feb 21 07:06:55 crc kubenswrapper[4820]: E0221 07:06:55.308015 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308022 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308356 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-log" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.308382 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" containerName="glance-httpd" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.310667 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.313861 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.314078 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.324807 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: W0221 07:06:55.360889 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef3827c2_ee55_4f86_a752_d7cbc9c6454e.slice/crio-7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9 WatchSource:0}: Error finding container 7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9: Status 404 returned error can't find the container with id 7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9 Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.375332 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.553741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673531 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.673752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.674172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.687968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.692961 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.700437 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.721112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.723220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 
07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.725744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.726590 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.727012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.730558 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5a553e-c548-455a-83e2-87f8f71f3067" path="/var/lib/kubelet/pods/1f5a553e-c548-455a-83e2-87f8f71f3067/volumes" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.731532 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c400cc2-a2a1-4204-8300-2b2420ab825e" path="/var/lib/kubelet/pods/5c400cc2-a2a1-4204-8300-2b2420ab825e/volumes" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.757667 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774575 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774622 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774720 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774814 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.774899 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") pod \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\" (UID: \"aa5aec23-74ee-4fc2-9fac-6039b558ec3d\") " Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.778166 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.778563 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.780639 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l" (OuterVolumeSpecName: "kube-api-access-wsc2l") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "kube-api-access-wsc2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.797224 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts" (OuterVolumeSpecName: "scripts") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.869725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880708 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880750 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsc2l\" (UniqueName: \"kubernetes.io/projected/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-kube-api-access-wsc2l\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880763 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880774 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.880785 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.936695 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.958636 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data" (OuterVolumeSpecName: "config-data") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.974184 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa5aec23-74ee-4fc2-9fac-6039b558ec3d" (UID: "aa5aec23-74ee-4fc2-9fac-6039b558ec3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.983443 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:55 crc kubenswrapper[4820]: I0221 07:06:55.983474 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5aec23-74ee-4fc2-9fac-6039b558ec3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.086715 4820 generic.go:334] "Generic (PLEG): container finished" podID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087185 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa5aec23-74ee-4fc2-9fac-6039b558ec3d","Type":"ContainerDied","Data":"c6e4c61f560fdc36ef8818a932ad9b4e68979f45ec64327ab6006d30f510ba75"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.087516 4820 scope.go:117] "RemoveContainer" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.100889 4820 generic.go:334] "Generic (PLEG): container finished" podID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerID="826aef72e76fbab81ee8a9700d6ed1f07cc109d2629349f71b59a9573befe3d1" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.100982 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b68n2" event={"ID":"e610e477-7d95-4af5-be48-f8a9acd81d6a","Type":"ContainerDied","Data":"826aef72e76fbab81ee8a9700d6ed1f07cc109d2629349f71b59a9573befe3d1"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.108657 4820 generic.go:334] "Generic (PLEG): container finished" podID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerID="8de9677e20a8b782d2bcecb9fa76424556258bd3e583a5de8910cd040771e0ad" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.108852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" event={"ID":"bbe51cee-e461-4a5f-86d9-0eb600da3a82","Type":"ContainerDied","Data":"8de9677e20a8b782d2bcecb9fa76424556258bd3e583a5de8910cd040771e0ad"} Feb 21 07:06:56 crc 
kubenswrapper[4820]: I0221 07:06:56.114804 4820 generic.go:334] "Generic (PLEG): container finished" podID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerID="bae2eaf1b1365374df39b8e13452ae986ea6ebeb55baae9a5ee7d5811ab1d647" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.114898 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" event={"ID":"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5","Type":"ContainerDied","Data":"bae2eaf1b1365374df39b8e13452ae986ea6ebeb55baae9a5ee7d5811ab1d647"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.114927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" event={"ID":"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5","Type":"ContainerStarted","Data":"d0463a86f850111d2b19d6b506160ff9ee874e80ebcf93e0f2794300be9175a2"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.148347 4820 scope.go:117] "RemoveContainer" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418" Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.154180 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerID="47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449" exitCode=0 Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.154230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerDied","Data":"47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.155421 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerStarted","Data":"7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9"} Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 
07:06:56.156565 4820 generic.go:334] "Generic (PLEG): container finished" podID="324a15c6-a903-420b-8db4-4268008c83d1" containerID="0bec83aee0f9a29a60415108651d81b24d0de435829325f2cb93c8d2a1d9ae61" exitCode=0
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.156655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vdzvw" event={"ID":"324a15c6-a903-420b-8db4-4268008c83d1","Type":"ContainerDied","Data":"0bec83aee0f9a29a60415108651d81b24d0de435829325f2cb93c8d2a1d9ae61"}
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.158443 4820 generic.go:334] "Generic (PLEG): container finished" podID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerID="fdbb90e329836ac7456cf06344114203e75f7f1a57280874e8b064833b913f8e" exitCode=0
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.158692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-n9j8x" event={"ID":"e27134bb-c9b2-42d4-bad5-81e7b05874e7","Type":"ContainerDied","Data":"fdbb90e329836ac7456cf06344114203e75f7f1a57280874e8b064833b913f8e"}
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.184021 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.201533 4820 scope.go:117] "RemoveContainer" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.202801 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217454 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217894 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217906 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217915 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217922 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217938 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217944 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.217965 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.217970 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218136 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-central-agent"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218149 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="sg-core"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218157 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="ceilometer-notification-agent"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.218167 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" containerName="proxy-httpd"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.219689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.221477 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.223476 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.278666 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.287382 4820 scope.go:117] "RemoveContainer" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.328387 4820 scope.go:117] "RemoveContainer" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.329865 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf\": container with ID starting with 4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf not found: ID does not exist" containerID="4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.329897 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf"} err="failed to get container status \"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf\": rpc error: code = NotFound desc = could not find container \"4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf\": container with ID starting with 4c89d26c9a51f7184a1be4a705f5b7f7627a0e5ed2f5076cd76a6dcb6eb86cdf not found: ID does not exist"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.329921 4820 scope.go:117] "RemoveContainer" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.332114 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418\": container with ID starting with d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418 not found: ID does not exist" containerID="d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.332158 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418"} err="failed to get container status \"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418\": rpc error: code = NotFound desc = could not find container \"d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418\": container with ID starting with d3b65f3e81048f7a2b17c9782bb998545d868af8062e8e70a2d2352fa0973418 not found: ID does not exist"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.332182 4820 scope.go:117] "RemoveContainer" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.344323 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0\": container with ID starting with 3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0 not found: ID does not exist" containerID="3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.344547 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0"} err="failed to get container status \"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0\": rpc error: code = NotFound desc = could not find container \"3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0\": container with ID starting with 3fa9b47a5a7196e62a1b994413af67131607c91433e8f620e52ed9e155a01ae0 not found: ID does not exist"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.344574 4820 scope.go:117] "RemoveContainer" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"
Feb 21 07:06:56 crc kubenswrapper[4820]: E0221 07:06:56.355931 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b\": container with ID starting with 0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b not found: ID does not exist" containerID="0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.355966 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b"} err="failed to get container status \"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b\": rpc error: code = NotFound desc = could not find container \"0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b\": container with ID starting with 0b41a3aba521cb0b4a1f07da7541830680d8ca7a055e36e4dd18920f732cd27b not found: ID does not exist"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390495 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390554 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390617 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.390716 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496185 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496213 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.496402 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.497138 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.497169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.500349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.500562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.501837 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.508092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.508312 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.508521 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.526726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"ceilometer-0\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.572038 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.620165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 07:06:56 crc kubenswrapper[4820]: W0221 07:06:56.640598 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a9bb0a5_0caa_4137_b448_a2b55d9be1ff.slice/crio-b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4 WatchSource:0}: Error finding container b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4: Status 404 returned error can't find the container with id b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.644159 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.808930 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") pod \"1fa19e90-7854-4eb9-9b72-26c8d0739851\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") "
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.810374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") pod \"1fa19e90-7854-4eb9-9b72-26c8d0739851\" (UID: \"1fa19e90-7854-4eb9-9b72-26c8d0739851\") "
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.814046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fa19e90-7854-4eb9-9b72-26c8d0739851" (UID: "1fa19e90-7854-4eb9-9b72-26c8d0739851"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.815545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh" (OuterVolumeSpecName: "kube-api-access-t2snh") pod "1fa19e90-7854-4eb9-9b72-26c8d0739851" (UID: "1fa19e90-7854-4eb9-9b72-26c8d0739851"). InnerVolumeSpecName "kube-api-access-t2snh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.903639 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85dd5db455-fl7mt" podUID="a319e7a0-81bf-4952-80fe-9c8b4cbd3f47" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: i/o timeout"
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.912871 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa19e90-7854-4eb9-9b72-26c8d0739851-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:56 crc kubenswrapper[4820]: I0221 07:06:56.912911 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2snh\" (UniqueName: \"kubernetes.io/projected/1fa19e90-7854-4eb9-9b72-26c8d0739851-kube-api-access-t2snh\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.160748 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:06:57 crc kubenswrapper[4820]: W0221 07:06:57.163123 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26b2ff3_30ed_493c_a041_e23ebe440501.slice/crio-602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff WatchSource:0}: Error finding container 602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff: Status 404 returned error can't find the container with id 602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.185704 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pjnhh"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.186389 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pjnhh" event={"ID":"1fa19e90-7854-4eb9-9b72-26c8d0739851","Type":"ContainerDied","Data":"8fcf3b62b09cc7c5fb997c8802705a5d6f14b9b14b1e93d39ca843241e67ca24"}
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.186431 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fcf3b62b09cc7c5fb997c8802705a5d6f14b9b14b1e93d39ca843241e67ca24"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.206400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerStarted","Data":"0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3"}
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.206460 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerStarted","Data":"d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96"}
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.208991 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerStarted","Data":"b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4"}
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.246420 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.246389001 podStartE2EDuration="3.246389001s" podCreationTimestamp="2026-02-21 07:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:57.233491439 +0000 UTC m=+1192.266575647" watchObservedRunningTime="2026-02-21 07:06:57.246389001 +0000 UTC m=+1192.279473219"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.630217 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.710183 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5aec23-74ee-4fc2-9fac-6039b558ec3d" path="/var/lib/kubelet/pods/aa5aec23-74ee-4fc2-9fac-6039b558ec3d/volumes"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.728391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") pod \"324a15c6-a903-420b-8db4-4268008c83d1\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") "
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.728557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") pod \"324a15c6-a903-420b-8db4-4268008c83d1\" (UID: \"324a15c6-a903-420b-8db4-4268008c83d1\") "
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.731011 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "324a15c6-a903-420b-8db4-4268008c83d1" (UID: "324a15c6-a903-420b-8db4-4268008c83d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.734554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v" (OuterVolumeSpecName: "kube-api-access-9pd8v") pod "324a15c6-a903-420b-8db4-4268008c83d1" (UID: "324a15c6-a903-420b-8db4-4268008c83d1"). InnerVolumeSpecName "kube-api-access-9pd8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.832393 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/324a15c6-a903-420b-8db4-4268008c83d1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.832422 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pd8v\" (UniqueName: \"kubernetes.io/projected/324a15c6-a903-420b-8db4-4268008c83d1-kube-api-access-9pd8v\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.850846 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.864182 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.900540 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.933058 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") pod \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") "
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.933369 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") pod \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\" (UID: \"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5\") "
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.934093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" (UID: "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.934119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:57 crc kubenswrapper[4820]: I0221 07:06:57.937143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h" (OuterVolumeSpecName: "kube-api-access-mbz8h") pod "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" (UID: "d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5"). InnerVolumeSpecName "kube-api-access-mbz8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.034776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") pod \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") "
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") pod \"e610e477-7d95-4af5-be48-f8a9acd81d6a\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") "
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035476 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") pod \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") "
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035530 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") pod \"e610e477-7d95-4af5-be48-f8a9acd81d6a\" (UID: \"e610e477-7d95-4af5-be48-f8a9acd81d6a\") "
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035570 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") pod \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\" (UID: \"e27134bb-c9b2-42d4-bad5-81e7b05874e7\") "
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.035642 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") pod \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\" (UID: \"bbe51cee-e461-4a5f-86d9-0eb600da3a82\") "
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036074 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036089 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbz8h\" (UniqueName: \"kubernetes.io/projected/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5-kube-api-access-mbz8h\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e27134bb-c9b2-42d4-bad5-81e7b05874e7" (UID: "e27134bb-c9b2-42d4-bad5-81e7b05874e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036555 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e610e477-7d95-4af5-be48-f8a9acd81d6a" (UID: "e610e477-7d95-4af5-be48-f8a9acd81d6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.036590 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbe51cee-e461-4a5f-86d9-0eb600da3a82" (UID: "bbe51cee-e461-4a5f-86d9-0eb600da3a82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.039611 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg" (OuterVolumeSpecName: "kube-api-access-4pncg") pod "e610e477-7d95-4af5-be48-f8a9acd81d6a" (UID: "e610e477-7d95-4af5-be48-f8a9acd81d6a"). InnerVolumeSpecName "kube-api-access-4pncg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.043652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s" (OuterVolumeSpecName: "kube-api-access-5nw9s") pod "bbe51cee-e461-4a5f-86d9-0eb600da3a82" (UID: "bbe51cee-e461-4a5f-86d9-0eb600da3a82"). InnerVolumeSpecName "kube-api-access-5nw9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.043697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6" (OuterVolumeSpecName: "kube-api-access-mwkj6") pod "e27134bb-c9b2-42d4-bad5-81e7b05874e7" (UID: "e27134bb-c9b2-42d4-bad5-81e7b05874e7"). InnerVolumeSpecName "kube-api-access-mwkj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138683 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e610e477-7d95-4af5-be48-f8a9acd81d6a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138722 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27134bb-c9b2-42d4-bad5-81e7b05874e7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138737 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbe51cee-e461-4a5f-86d9-0eb600da3a82-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138751 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nw9s\" (UniqueName: \"kubernetes.io/projected/bbe51cee-e461-4a5f-86d9-0eb600da3a82-kube-api-access-5nw9s\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138764 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pncg\" (UniqueName: \"kubernetes.io/projected/e610e477-7d95-4af5-be48-f8a9acd81d6a-kube-api-access-4pncg\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.138775 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwkj6\" (UniqueName: \"kubernetes.io/projected/e27134bb-c9b2-42d4-bad5-81e7b05874e7-kube-api-access-mwkj6\") on node \"crc\" DevicePath \"\""
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.219820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerStarted","Data":"c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.222430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.222471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.223597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vdzvw" event={"ID":"324a15c6-a903-420b-8db4-4268008c83d1","Type":"ContainerDied","Data":"7ae594b8acd25b250e0b397c453bfccd82d4cdfe17cc49f7535da3a8a40fcc1f"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.223624 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae594b8acd25b250e0b397c453bfccd82d4cdfe17cc49f7535da3a8a40fcc1f"
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.223675 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vdzvw"
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.229349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-n9j8x" event={"ID":"e27134bb-c9b2-42d4-bad5-81e7b05874e7","Type":"ContainerDied","Data":"c62d1b598ca12ca3ef447b230a957a6ca222b2abcd68ccdf032833cfe33c6549"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.229393 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62d1b598ca12ca3ef447b230a957a6ca222b2abcd68ccdf032833cfe33c6549"
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.230318 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-n9j8x"
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.232823 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b68n2"
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.233431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b68n2" event={"ID":"e610e477-7d95-4af5-be48-f8a9acd81d6a","Type":"ContainerDied","Data":"3387d4191f1769cf4932444349d1da8e3c1840dbe23238ce666e4230b0ce3e70"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.233461 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3387d4191f1769cf4932444349d1da8e3c1840dbe23238ce666e4230b0ce3e70"
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.237419 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" event={"ID":"bbe51cee-e461-4a5f-86d9-0eb600da3a82","Type":"ContainerDied","Data":"be924def0b2a9b3a4222f8343b7d95f0374d522834a949670480b5db5a155cad"}
Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.237516 4820 pod_container_deletor.go:80] "Container not found in pod's containers"
containerID="be924def0b2a9b3a4222f8343b7d95f0374d522834a949670480b5db5a155cad" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.237569 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ecb-account-create-update-q98t2" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.241684 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.247508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-96c7-account-create-update-fhgrk" event={"ID":"d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5","Type":"ContainerDied","Data":"d0463a86f850111d2b19d6b506160ff9ee874e80ebcf93e0f2794300be9175a2"} Feb 21 07:06:58 crc kubenswrapper[4820]: I0221 07:06:58.247621 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0463a86f850111d2b19d6b506160ff9ee874e80ebcf93e0f2794300be9175a2" Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.118581 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.267282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerStarted","Data":"c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a"} Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.271464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa"} Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.271518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585"} Feb 21 07:06:59 crc kubenswrapper[4820]: I0221 07:06:59.304694 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.304672676 podStartE2EDuration="4.304672676s" podCreationTimestamp="2026-02-21 07:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:06:59.291962319 +0000 UTC m=+1194.325046527" watchObservedRunningTime="2026-02-21 07:06:59.304672676 +0000 UTC m=+1194.337756874" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.284666 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerID="336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b" exitCode=0 Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.284913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerDied","Data":"336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b"} Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.408140 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.493886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.493953 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.493984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.494081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.494132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") pod \"5cfa00dc-af93-49c8-ac1b-67cea9851389\" (UID: \"5cfa00dc-af93-49c8-ac1b-67cea9851389\") " Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.499387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4" (OuterVolumeSpecName: "kube-api-access-zjdn4") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "kube-api-access-zjdn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.500426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.559626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config" (OuterVolumeSpecName: "config") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.568796 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.587782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5cfa00dc-af93-49c8-ac1b-67cea9851389" (UID: "5cfa00dc-af93-49c8-ac1b-67cea9851389"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595781 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjdn4\" (UniqueName: \"kubernetes.io/projected/5cfa00dc-af93-49c8-ac1b-67cea9851389-kube-api-access-zjdn4\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595818 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595829 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595837 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:00 crc kubenswrapper[4820]: I0221 07:07:00.595849 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfa00dc-af93-49c8-ac1b-67cea9851389-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.295616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7777947948-b8bjv" event={"ID":"5cfa00dc-af93-49c8-ac1b-67cea9851389","Type":"ContainerDied","Data":"3e28ba467d144d224a1ff3d02bb67eaf401e7d86630f2424dc064e42e81ffa60"} Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.295922 4820 scope.go:117] "RemoveContainer" containerID="47540e3342615d58fd4f14384685d36d1d488276912b091d77e02f8d31604449" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.296076 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7777947948-b8bjv" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.303644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerStarted","Data":"a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5"} Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.303969 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" containerID="cri-o://7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304082 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" containerID="cri-o://a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304164 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" containerID="cri-o://67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304219 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" containerID="cri-o://57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585" gracePeriod=30 Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.304953 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.329087 4820 
scope.go:117] "RemoveContainer" containerID="336d7e018fc3ba9ca31cabbde804230c2c9a2a352511b16336cc0f2ad7e63c2b" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.340704 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.089647719 podStartE2EDuration="5.340687295s" podCreationTimestamp="2026-02-21 07:06:56 +0000 UTC" firstStartedPulling="2026-02-21 07:06:57.172610207 +0000 UTC m=+1192.205694405" lastFinishedPulling="2026-02-21 07:07:00.423649783 +0000 UTC m=+1195.456733981" observedRunningTime="2026-02-21 07:07:01.334735961 +0000 UTC m=+1196.367820179" watchObservedRunningTime="2026-02-21 07:07:01.340687295 +0000 UTC m=+1196.373771493" Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.380756 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.391834 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7777947948-b8bjv"] Feb 21 07:07:01 crc kubenswrapper[4820]: I0221 07:07:01.708437 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" path="/var/lib/kubelet/pods/5cfa00dc-af93-49c8-ac1b-67cea9851389/volumes" Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.317037 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerID="a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5" exitCode=0 Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318000 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerID="67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa" exitCode=2 Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318098 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" 
containerID="57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585" exitCode=0 Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.317247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5"} Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa"} Feb 21 07:07:02 crc kubenswrapper[4820]: I0221 07:07:02.318403 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585"} Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.936500 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937323 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937341 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937358 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerName="mariadb-account-create-update" Feb 21 07:07:03 
crc kubenswrapper[4820]: E0221 07:07:03.937370 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324a15c6-a903-420b-8db4-4268008c83d1" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937377 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="324a15c6-a903-420b-8db4-4268008c83d1" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937395 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937403 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937415 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937423 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937441 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937448 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937457 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-api" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937466 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" 
containerName="neutron-api" Feb 21 07:07:03 crc kubenswrapper[4820]: E0221 07:07:03.937485 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937492 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937682 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937704 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937725 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937734 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-httpd" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937744 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="324a15c6-a903-420b-8db4-4268008c83d1" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937757 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfa00dc-af93-49c8-ac1b-67cea9851389" containerName="neutron-api" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937770 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" containerName="mariadb-account-create-update" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.937779 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" containerName="mariadb-database-create" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.938607 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.940790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4z72b" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.942179 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.951031 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:07:03 crc kubenswrapper[4820]: I0221 07:07:03.957560 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061207 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.061266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.163335 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.169042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.169678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.172728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.178924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"nova-cell0-conductor-db-sync-bd4bz\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") " pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.261545 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.642031 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.642384 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.729311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.744989 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 21 07:07:04 crc kubenswrapper[4820]: I0221 07:07:04.760569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.369269 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerStarted","Data":"460eb11279172258e3178108475a861968db26b641defdcf5011ebe38d54ec92"} Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.369664 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.369809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.938044 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.938440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.973808 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:05 crc kubenswrapper[4820]: I0221 07:07:05.986879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.385629 4820 generic.go:334] "Generic (PLEG): container finished" podID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerID="7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492" exitCode=0 Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.385667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492"} Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.386543 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.386790 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.598784 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.723846 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.723899 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724134 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.724160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") pod \"f26b2ff3-30ed-493c-a041-e23ebe440501\" (UID: \"f26b2ff3-30ed-493c-a041-e23ebe440501\") " Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.725384 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.725980 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.729953 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts" (OuterVolumeSpecName: "scripts") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.739583 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4" (OuterVolumeSpecName: "kube-api-access-9k7x4") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "kube-api-access-9k7x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.776591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.804994 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.828712 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.828870 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k7x4\" (UniqueName: \"kubernetes.io/projected/f26b2ff3-30ed-493c-a041-e23ebe440501-kube-api-access-9k7x4\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.828964 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.829045 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.829112 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.829177 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f26b2ff3-30ed-493c-a041-e23ebe440501-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.842437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data" (OuterVolumeSpecName: "config-data") pod "f26b2ff3-30ed-493c-a041-e23ebe440501" (UID: "f26b2ff3-30ed-493c-a041-e23ebe440501"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:06 crc kubenswrapper[4820]: I0221 07:07:06.931504 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b2ff3-30ed-493c-a041-e23ebe440501-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397515 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397855 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f26b2ff3-30ed-493c-a041-e23ebe440501","Type":"ContainerDied","Data":"602e0f7ba4052c49a23a9fff17d69d77cdcec617d35ca3267b743f85867d48ff"} Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.398776 4820 scope.go:117] "RemoveContainer" containerID="a1ee8699a76a0017afe6e7ed62b20189fd21ac0e6b14a53a848ef5bc27c620d5" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.397756 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.401628 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.454185 4820 scope.go:117] "RemoveContainer" containerID="67805b4f532b6b09b25e3e5122cd2ee5e37f1c1848e63607e7c052fcb1189faa" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.463740 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.473965 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.479315 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.487572 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.487968 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.487985 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.488005 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488013 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.488043 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488052 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" Feb 21 07:07:07 crc kubenswrapper[4820]: E0221 07:07:07.488064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488070 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488306 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="sg-core" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488323 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-central-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488338 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="proxy-httpd" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.488357 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" containerName="ceilometer-notification-agent" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.490132 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.492892 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.493333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.504373 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.528898 4820 scope.go:117] "RemoveContainer" containerID="57acf0d445c45201c28b7e61b71fd9e047cbc8a5dd4b6966f3be185a4c61d585" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543713 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543788 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.543971 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.551392 4820 scope.go:117] "RemoveContainer" containerID="7999ee8773a4aa69576bcdac140bc3498de183ecb7a45046a7dab59909755492" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645656 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645766 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645788 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.645846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 
07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.646794 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.646860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.652678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.652721 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.652725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.664056 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.664179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"ceilometer-0\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " pod="openstack/ceilometer-0" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.709442 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26b2ff3-30ed-493c-a041-e23ebe440501" path="/var/lib/kubelet/pods/f26b2ff3-30ed-493c-a041-e23ebe440501/volumes" Feb 21 07:07:07 crc kubenswrapper[4820]: I0221 07:07:07.822284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:08 crc kubenswrapper[4820]: I0221 07:07:08.301311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:08 crc kubenswrapper[4820]: I0221 07:07:08.304888 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 21 07:07:12 crc kubenswrapper[4820]: I0221 07:07:12.870670 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:12 crc kubenswrapper[4820]: W0221 07:07:12.875158 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31265e58_52ac_4a6c_86b2_ec212e0ed318.slice/crio-25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a WatchSource:0}: Error finding container 25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a: Status 404 returned error can't find the container with id 25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.460612 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a"} Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.462782 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerStarted","Data":"f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f"} Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.816638 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:07:13 crc kubenswrapper[4820]: I0221 07:07:13.816706 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.471511 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53"} Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.471826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9"} Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.695698 4820 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" podStartSLOduration=4.000802236 podStartE2EDuration="11.695677988s" podCreationTimestamp="2026-02-21 07:07:03 +0000 UTC" firstStartedPulling="2026-02-21 07:07:04.79053702 +0000 UTC m=+1199.823621218" lastFinishedPulling="2026-02-21 07:07:12.485412782 +0000 UTC m=+1207.518496970" observedRunningTime="2026-02-21 07:07:13.481619886 +0000 UTC m=+1208.514704084" watchObservedRunningTime="2026-02-21 07:07:14.695677988 +0000 UTC m=+1209.728762186" Feb 21 07:07:14 crc kubenswrapper[4820]: I0221 07:07:14.710750 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:07:15 crc kubenswrapper[4820]: I0221 07:07:15.481778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f"} Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.493909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerStarted","Data":"f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d"} Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494214 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent" containerID="cri-o://fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494391 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" 
containerName="proxy-httpd" containerID="cri-o://f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core" containerID="cri-o://400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.494483 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent" containerID="cri-o://becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53" gracePeriod=30 Feb 21 07:07:16 crc kubenswrapper[4820]: I0221 07:07:16.528550 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.225479462 podStartE2EDuration="9.528525178s" podCreationTimestamp="2026-02-21 07:07:07 +0000 UTC" firstStartedPulling="2026-02-21 07:07:12.879120794 +0000 UTC m=+1207.912204992" lastFinishedPulling="2026-02-21 07:07:16.18216651 +0000 UTC m=+1211.215250708" observedRunningTime="2026-02-21 07:07:16.520104908 +0000 UTC m=+1211.553189116" watchObservedRunningTime="2026-02-21 07:07:16.528525178 +0000 UTC m=+1211.561609376" Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 07:07:17.508710 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f" exitCode=2 Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 07:07:17.509041 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53" exitCode=0 Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 
07:07:17.509078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f"} Feb 21 07:07:17 crc kubenswrapper[4820]: I0221 07:07:17.509116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53"} Feb 21 07:07:20 crc kubenswrapper[4820]: I0221 07:07:20.534693 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9" exitCode=0 Feb 21 07:07:20 crc kubenswrapper[4820]: I0221 07:07:20.534916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9"} Feb 21 07:07:22 crc kubenswrapper[4820]: I0221 07:07:22.551902 4820 generic.go:334] "Generic (PLEG): container finished" podID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerID="f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f" exitCode=0 Feb 21 07:07:22 crc kubenswrapper[4820]: I0221 07:07:22.551948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerDied","Data":"f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f"} Feb 21 07:07:23 crc kubenswrapper[4820]: I0221 07:07:23.896954 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") "
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") "
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") "
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.041319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") pod \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\" (UID: \"9f7e07b2-8561-41da-9c7f-ea5d80280d0a\") "
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.046960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn" (OuterVolumeSpecName: "kube-api-access-77dvn") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "kube-api-access-77dvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.048969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts" (OuterVolumeSpecName: "scripts") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.066598 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.068517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data" (OuterVolumeSpecName: "config-data") pod "9f7e07b2-8561-41da-9c7f-ea5d80280d0a" (UID: "9f7e07b2-8561-41da-9c7f-ea5d80280d0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144062 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144109 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144126 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.144139 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77dvn\" (UniqueName: \"kubernetes.io/projected/9f7e07b2-8561-41da-9c7f-ea5d80280d0a-kube-api-access-77dvn\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.571326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bd4bz" event={"ID":"9f7e07b2-8561-41da-9c7f-ea5d80280d0a","Type":"ContainerDied","Data":"460eb11279172258e3178108475a861968db26b641defdcf5011ebe38d54ec92"}
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.571369 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460eb11279172258e3178108475a861968db26b641defdcf5011ebe38d54ec92"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.571431 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bd4bz"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.683055 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 21 07:07:24 crc kubenswrapper[4820]: E0221 07:07:24.683477 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerName="nova-cell0-conductor-db-sync"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.683490 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerName="nova-cell0-conductor-db-sync"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.683717 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" containerName="nova-cell0-conductor-db-sync"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.684313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.686424 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4z72b"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.686621 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.708831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.857918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.858045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.858071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.959356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.959429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.959460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.964890 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.970379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:24 crc kubenswrapper[4820]: I0221 07:07:24.975910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"nova-cell0-conductor-0\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:25 crc kubenswrapper[4820]: I0221 07:07:25.003210 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:25 crc kubenswrapper[4820]: I0221 07:07:25.436149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 21 07:07:25 crc kubenswrapper[4820]: I0221 07:07:25.583792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerStarted","Data":"b5d7777c4805cb6f20d3b114fa2f8d4c4b48ab9ca066a18749eb9c88daef742c"}
Feb 21 07:07:27 crc kubenswrapper[4820]: I0221 07:07:27.609611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerStarted","Data":"498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae"}
Feb 21 07:07:27 crc kubenswrapper[4820]: I0221 07:07:27.612166 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:27 crc kubenswrapper[4820]: I0221 07:07:27.636052 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.6360309600000003 podStartE2EDuration="3.63603096s" podCreationTimestamp="2026-02-21 07:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:27.633058429 +0000 UTC m=+1222.666142657" watchObservedRunningTime="2026-02-21 07:07:27.63603096 +0000 UTC m=+1222.669115178"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.027836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.455733 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.456982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.460526 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.461949 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.472295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.649658 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.652151 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.654852 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.657258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.658070 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.658396 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.658454 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.665177 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759567 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.759627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.760104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.760147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.760290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.778976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.779332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.781829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.862911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.863008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.863095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.868813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.869160 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.871521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.878539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.888930 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.890359 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.896486 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.897434 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.900792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"nova-cell0-cell-mapping-zwzx4\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") " pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.914335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.932252 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.933308 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.954493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.960279 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:07:35 crc kubenswrapper[4820]: I0221 07:07:35.996583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"nova-scheduler-0\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " pod="openstack/nova-scheduler-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.004302 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067229 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067285 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067308 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.067490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.089697 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.096426 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"]
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.097925 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.139096 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"]
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170831 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170912 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.170994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.171004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.171020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.171698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.173691 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.183786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.184393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.184821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.185210 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.192967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.193507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.194553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"nova-metadata-0\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.202668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"nova-api-0\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.204427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.276917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.276997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277090 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277133 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.277164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.279815 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.315883 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.340558 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.351621 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " 
pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.380770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.381920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.382222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.382810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.384743 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.384940 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.406361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"dnsmasq-dns-849fff7679-pwg2d\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.438658 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.650042 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.723501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerStarted","Data":"37bd9df794135c79db6e9ba865bdb9c7e4ef5f96af9345bee31518d57b7081f7"} Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.840577 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.869732 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.871259 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.877180 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.877544 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.885330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:07:36 crc kubenswrapper[4820]: I0221 07:07:36.988721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.006611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.006704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.008637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " 
pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.009267 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.110998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.111295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.111397 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.111438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: 
\"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.116818 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.118361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.122074 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.132587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"nova-cell1-conductor-db-sync-6rxdc\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") " pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.148463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.169320 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:37 crc 
kubenswrapper[4820]: I0221 07:07:37.239575 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.268149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:07:37 crc kubenswrapper[4820]: W0221 07:07:37.286855 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc801035_b5e1_4e87_b8a1_c1d9474466c5.slice/crio-56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75 WatchSource:0}: Error finding container 56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75: Status 404 returned error can't find the container with id 56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75 Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.740631 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerStarted","Data":"c40349a2af25367fc0c110fa968f40da829ddde9d2559551b284dfe24a879a9e"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.746058 4820 generic.go:334] "Generic (PLEG): container finished" podID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerID="5f5ab8e6435ddfdb4e8c77819cee3cfc2fa9fc05ae6a9ae155da8503f7b0f636" exitCode=0 Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.746130 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerDied","Data":"5f5ab8e6435ddfdb4e8c77819cee3cfc2fa9fc05ae6a9ae155da8503f7b0f636"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.746162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" 
event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerStarted","Data":"56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.759591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerStarted","Data":"d0bbd16e326afec3eb0db3b65db6e116903c8b3a5f97fc7f0031dfc181db09dc"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.762018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerStarted","Data":"6e559911a5c0b4319322723a73f4f2e1a523f0fbef9ae966ae10c0602b1eb11b"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.762078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.770167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerStarted","Data":"43724447d4673266639761e597dc790ef34ac85ca1a755c0b241a37ed12c81c4"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.776114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerStarted","Data":"e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882"} Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.812696 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zwzx4" podStartSLOduration=2.812653294 podStartE2EDuration="2.812653294s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 
07:07:37.806515257 +0000 UTC m=+1232.839599465" watchObservedRunningTime="2026-02-21 07:07:37.812653294 +0000 UTC m=+1232.845737512" Feb 21 07:07:37 crc kubenswrapper[4820]: I0221 07:07:37.831797 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 21 07:07:38 crc kubenswrapper[4820]: I0221 07:07:38.787332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerStarted","Data":"22e61ebbd8028a1aa9eec99aa283035d5fdfc12cc29f2dbc4e516b9b929c2ac2"} Feb 21 07:07:38 crc kubenswrapper[4820]: I0221 07:07:38.789576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerStarted","Data":"9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70"} Feb 21 07:07:38 crc kubenswrapper[4820]: I0221 07:07:38.808456 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" podStartSLOduration=2.808438666 podStartE2EDuration="2.808438666s" podCreationTimestamp="2026-02-21 07:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:38.805589078 +0000 UTC m=+1233.838673296" watchObservedRunningTime="2026-02-21 07:07:38.808438666 +0000 UTC m=+1233.841522864" Feb 21 07:07:39 crc kubenswrapper[4820]: I0221 07:07:39.775332 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:07:39 crc kubenswrapper[4820]: I0221 07:07:39.791970 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:39 crc kubenswrapper[4820]: I0221 
07:07:39.802124 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.812743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerStarted","Data":"fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.816931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log" containerID="cri-o://da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" gracePeriod=30 Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.816980 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata" containerID="cri-o://8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" gracePeriod=30 Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.816876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerStarted","Data":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.817942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerStarted","Data":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.822535 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerStarted","Data":"f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.822567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerStarted","Data":"9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.825501 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerStarted","Data":"41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.830643 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890" gracePeriod=30 Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.830851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerStarted","Data":"8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890"} Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.850607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.847247325 podStartE2EDuration="5.850585132s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:36.852414343 +0000 UTC m=+1231.885498541" lastFinishedPulling="2026-02-21 07:07:39.85575214 +0000 UTC m=+1234.888836348" observedRunningTime="2026-02-21 07:07:40.840568257 +0000 UTC m=+1235.873652455" 
watchObservedRunningTime="2026-02-21 07:07:40.850585132 +0000 UTC m=+1235.883669330" Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.864977 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.038098933 podStartE2EDuration="5.864960147s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:37.124885453 +0000 UTC m=+1232.157969651" lastFinishedPulling="2026-02-21 07:07:39.951746667 +0000 UTC m=+1234.984830865" observedRunningTime="2026-02-21 07:07:40.864561116 +0000 UTC m=+1235.897645324" watchObservedRunningTime="2026-02-21 07:07:40.864960147 +0000 UTC m=+1235.898044345" Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.887816 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" podStartSLOduration=4.887799605 podStartE2EDuration="4.887799605s" podCreationTimestamp="2026-02-21 07:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:40.885957984 +0000 UTC m=+1235.919042192" watchObservedRunningTime="2026-02-21 07:07:40.887799605 +0000 UTC m=+1235.920883803" Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.915794 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.997424601 podStartE2EDuration="5.915776754s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:36.980223843 +0000 UTC m=+1232.013308041" lastFinishedPulling="2026-02-21 07:07:39.898575996 +0000 UTC m=+1234.931660194" observedRunningTime="2026-02-21 07:07:40.905430009 +0000 UTC m=+1235.938514207" watchObservedRunningTime="2026-02-21 07:07:40.915776754 +0000 UTC m=+1235.948860952" Feb 21 07:07:40 crc kubenswrapper[4820]: I0221 07:07:40.942261 4820 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.250776759 podStartE2EDuration="5.942212909s" podCreationTimestamp="2026-02-21 07:07:35 +0000 UTC" firstStartedPulling="2026-02-21 07:07:37.165977405 +0000 UTC m=+1232.199061603" lastFinishedPulling="2026-02-21 07:07:39.857413555 +0000 UTC m=+1234.890497753" observedRunningTime="2026-02-21 07:07:40.933759247 +0000 UTC m=+1235.966843465" watchObservedRunningTime="2026-02-21 07:07:40.942212909 +0000 UTC m=+1235.975297107" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.283357 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.318340 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.318393 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.355469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.501120 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607355 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") pod \"03a82042-44f5-4238-ba8a-ec7650f46a93\" (UID: \"03a82042-44f5-4238-ba8a-ec7650f46a93\") " Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.607712 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs" (OuterVolumeSpecName: "logs") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.608645 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03a82042-44f5-4238-ba8a-ec7650f46a93-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.614628 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck" (OuterVolumeSpecName: "kube-api-access-vbdck") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "kube-api-access-vbdck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.633918 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data" (OuterVolumeSpecName: "config-data") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.639889 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03a82042-44f5-4238-ba8a-ec7650f46a93" (UID: "03a82042-44f5-4238-ba8a-ec7650f46a93"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.709557 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbdck\" (UniqueName: \"kubernetes.io/projected/03a82042-44f5-4238-ba8a-ec7650f46a93-kube-api-access-vbdck\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.709868 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.709882 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03a82042-44f5-4238-ba8a-ec7650f46a93-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.841256 4820 generic.go:334] "Generic (PLEG): container finished" podID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" exitCode=0 Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.841287 4820 generic.go:334] "Generic (PLEG): container finished" podID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" exitCode=143 Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.842043 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862480 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerDied","Data":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"} Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerDied","Data":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"} Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03a82042-44f5-4238-ba8a-ec7650f46a93","Type":"ContainerDied","Data":"43724447d4673266639761e597dc790ef34ac85ca1a755c0b241a37ed12c81c4"} Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.862583 4820 scope.go:117] "RemoveContainer" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.922919 4820 scope.go:117] "RemoveContainer" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.926712 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.945521 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.960511 4820 scope.go:117] "RemoveContainer" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.961156 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": container with ID starting with 8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43 not found: ID does not exist" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961200 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"} err="failed to get container status \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": rpc error: code = NotFound desc = could not find container \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": container with ID starting with 8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43 not found: ID does not exist" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961246 4820 scope.go:117] "RemoveContainer" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.961682 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": container with ID starting with da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67 not found: ID does not exist" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961714 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"} err="failed to get container status \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": rpc error: code = NotFound desc = could not find container \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": container with ID 
starting with da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67 not found: ID does not exist" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961735 4820 scope.go:117] "RemoveContainer" containerID="8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.961758 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.962284 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962308 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata" Feb 21 07:07:41 crc kubenswrapper[4820]: E0221 07:07:41.962369 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962379 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962593 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-log" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962631 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" containerName="nova-metadata-metadata" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962625 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43"} err="failed to get container status \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": 
rpc error: code = NotFound desc = could not find container \"8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43\": container with ID starting with 8cba65c705e9d3d64722de5dc1877046b4051a6142f319267f3bed3b3b460f43 not found: ID does not exist" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.962650 4820 scope.go:117] "RemoveContainer" containerID="da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.963002 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67"} err="failed to get container status \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": rpc error: code = NotFound desc = could not find container \"da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67\": container with ID starting with da07d663c73365c203572e7609b3e8c4a546e1a60d55810add12e7b56eae6a67 not found: ID does not exist" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.963847 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.966616 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.966662 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 07:07:41 crc kubenswrapper[4820]: I0221 07:07:41.982957 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.024776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.024854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.025173 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.025255 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.025287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.126934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.126997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 
07:07:42.127254 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.127461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.131151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.132138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.133940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.154473 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod 
\"nova-metadata-0\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") " pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.280766 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.751121 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:42 crc kubenswrapper[4820]: W0221 07:07:42.755939 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod625e0821_44af_4965_aa51_75c1d5839e7c.slice/crio-97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec WatchSource:0}: Error finding container 97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec: Status 404 returned error can't find the container with id 97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec Feb 21 07:07:42 crc kubenswrapper[4820]: I0221 07:07:42.854438 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerStarted","Data":"97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec"} Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.705882 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a82042-44f5-4238-ba8a-ec7650f46a93" path="/var/lib/kubelet/pods/03a82042-44f5-4238-ba8a-ec7650f46a93/volumes" Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.816825 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.816916 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.863529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerStarted","Data":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"} Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.863611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerStarted","Data":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"} Feb 21 07:07:43 crc kubenswrapper[4820]: I0221 07:07:43.903086 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.903066339 podStartE2EDuration="2.903066339s" podCreationTimestamp="2026-02-21 07:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:43.892587052 +0000 UTC m=+1238.925671260" watchObservedRunningTime="2026-02-21 07:07:43.903066339 +0000 UTC m=+1238.936150547" Feb 21 07:07:45 crc kubenswrapper[4820]: I0221 07:07:45.882271 4820 generic.go:334] "Generic (PLEG): container finished" podID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerID="e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882" exitCode=0 Feb 21 07:07:45 crc kubenswrapper[4820]: I0221 07:07:45.882290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" 
event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerDied","Data":"e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.281894 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.321163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.341556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.341731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.440498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.515195 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"] Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.515527 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns" containerID="cri-o://f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed" gracePeriod=10 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.901287 4820 generic.go:334] "Generic (PLEG): container finished" podID="68596d31-1da0-47aa-9330-179af16beee5" containerID="f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed" exitCode=0 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.901461 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" 
event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerDied","Data":"f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.904657 4820 generic.go:334] "Generic (PLEG): container finished" podID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerID="f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d" exitCode=137 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.904822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.906298 4820 generic.go:334] "Generic (PLEG): container finished" podID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerID="41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714" exitCode=0 Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.907232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerDied","Data":"41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714"} Feb 21 07:07:46 crc kubenswrapper[4820]: I0221 07:07:46.947869 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.047883 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.211321 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241611 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241778 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241809 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.241935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") pod \"31265e58-52ac-4a6c-86b2-ec212e0ed318\" (UID: \"31265e58-52ac-4a6c-86b2-ec212e0ed318\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.242627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.248348 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.257142 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh" (OuterVolumeSpecName: "kube-api-access-rrwmh") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "kube-api-access-rrwmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.257156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts" (OuterVolumeSpecName: "scripts") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.281529 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.281575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.286679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.317346 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.342881 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.342948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.342993 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343209 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") pod \"68596d31-1da0-47aa-9330-179af16beee5\" (UID: \"68596d31-1da0-47aa-9330-179af16beee5\") " Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343778 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343803 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31265e58-52ac-4a6c-86b2-ec212e0ed318-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343814 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343827 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.343841 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwmh\" (UniqueName: \"kubernetes.io/projected/31265e58-52ac-4a6c-86b2-ec212e0ed318-kube-api-access-rrwmh\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.347581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.350992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj" (OuterVolumeSpecName: "kube-api-access-mp4lj") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "kube-api-access-mp4lj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.388434 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.424867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.429607 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.442717 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.442913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.444321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config" (OuterVolumeSpecName: "config") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") "
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") "
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") "
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.445788 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") pod \"4d96a68b-1b90-4fcd-9716-679be14d3157\" (UID: \"4d96a68b-1b90-4fcd-9716-679be14d3157\") "
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.447494 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448161 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448198 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448212 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp4lj\" (UniqueName: \"kubernetes.io/projected/68596d31-1da0-47aa-9330-179af16beee5-kube-api-access-mp4lj\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448223 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.448233 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.449321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts" (OuterVolumeSpecName: "scripts") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.450039 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68596d31-1da0-47aa-9330-179af16beee5" (UID: "68596d31-1da0-47aa-9330-179af16beee5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.453345 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data" (OuterVolumeSpecName: "config-data") pod "31265e58-52ac-4a6c-86b2-ec212e0ed318" (UID: "31265e58-52ac-4a6c-86b2-ec212e0ed318"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.454554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr" (OuterVolumeSpecName: "kube-api-access-nrtgr") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "kube-api-access-nrtgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.478900 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.479408 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data" (OuterVolumeSpecName: "config-data") pod "4d96a68b-1b90-4fcd-9716-679be14d3157" (UID: "4d96a68b-1b90-4fcd-9716-679be14d3157"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.549967 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31265e58-52ac-4a6c-86b2-ec212e0ed318-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550003 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68596d31-1da0-47aa-9330-179af16beee5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrtgr\" (UniqueName: \"kubernetes.io/projected/4d96a68b-1b90-4fcd-9716-679be14d3157-kube-api-access-nrtgr\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550026 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550033 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.550041 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d96a68b-1b90-4fcd-9716-679be14d3157-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.921649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31265e58-52ac-4a6c-86b2-ec212e0ed318","Type":"ContainerDied","Data":"25c9cdbbef70f629279a4f41b39405f02ae6d43bb63394c726f462fd5002be7a"}
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.921677 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.921725 4820 scope.go:117] "RemoveContainer" containerID="f5c804fb1acb9c9a861723b2c8e5a22293c9ea892e126697bf9690d8f473209d"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.925433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2" event={"ID":"68596d31-1da0-47aa-9330-179af16beee5","Type":"ContainerDied","Data":"28ad0df7b26bbd0219980c2f8c1104679c4b4d8454ba1005ca678ce2d979fa35"}
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.925545 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c69c79c7f-n4pc2"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.935553 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zwzx4"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.936478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zwzx4" event={"ID":"4d96a68b-1b90-4fcd-9716-679be14d3157","Type":"ContainerDied","Data":"37bd9df794135c79db6e9ba865bdb9c7e4ef5f96af9345bee31518d57b7081f7"}
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.936523 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37bd9df794135c79db6e9ba865bdb9c7e4ef5f96af9345bee31518d57b7081f7"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.956026 4820 scope.go:117] "RemoveContainer" containerID="400d57d6a004990f14afdf231154959e4890616c3cf7d1921676480ca781b28f"
Feb 21 07:07:47 crc kubenswrapper[4820]: I0221 07:07:47.988331 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.007762 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.012020 4820 scope.go:117] "RemoveContainer" containerID="becbc2643cc69b769ac18f5227bd7ddcb7a1b80bb9f754bac7d9c64e0e943e53"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.028537 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.038307 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c69c79c7f-n4pc2"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.046870 4820 scope.go:117] "RemoveContainer" containerID="fa1157f95ebab043575672bfe021a1abb9b6b0fa51b6e45dd82063699dc6ecf9"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050120 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050547 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050563 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050579 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050586 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050599 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="init"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050605 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="init"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050613 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050623 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050640 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050646 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050660 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerName="nova-manage"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050667 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerName="nova-manage"
Feb 21 07:07:48 crc kubenswrapper[4820]: E0221 07:07:48.050686 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.050692 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054323 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-central-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054360 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="sg-core"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054375 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="68596d31-1da0-47aa-9330-179af16beee5" containerName="dnsmasq-dns"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054387 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="ceilometer-notification-agent"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054398 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" containerName="nova-manage"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.054409 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" containerName="proxy-httpd"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.055971 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.062291 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.063702 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.063803 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.088563 4820 scope.go:117] "RemoveContainer" containerID="f652bc5f84c383e4df28b7028766cbc0147be5d396eb0aeb52cbd94dbc2ad6ed"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.132090 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.132483 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" containerID="cri-o://9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.132910 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" containerID="cri-o://f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.155022 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169754 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169880 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.169980 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.181524 4820 scope.go:117] "RemoveContainer" containerID="aab33edaeb25dccd647f693bcaba1307465b538dbe3fc05e9d81c6d78bcc4858"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.197584 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.197931 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log" containerID="cri-o://97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.197984 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata" containerID="cri-o://b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273295 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.273349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.274723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.274944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.278403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.279388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.280388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.287591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.292971 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"ceilometer-0\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.414406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.466693 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482925 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.482973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") pod \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\" (UID: \"1b3f478b-4142-46b8-a9ca-603e9e1860ac\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.487507 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv" (OuterVolumeSpecName: "kube-api-access-2rcpv") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "kube-api-access-2rcpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.491685 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts" (OuterVolumeSpecName: "scripts") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.529874 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data" (OuterVolumeSpecName: "config-data") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.549586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3f478b-4142-46b8-a9ca-603e9e1860ac" (UID: "1b3f478b-4142-46b8-a9ca-603e9e1860ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585215 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585472 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585557 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3f478b-4142-46b8-a9ca-603e9e1860ac-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.585628 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rcpv\" (UniqueName: \"kubernetes.io/projected/1b3f478b-4142-46b8-a9ca-603e9e1860ac-kube-api-access-2rcpv\") on node \"crc\" DevicePath \"\""
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.799598 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.951320 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6rxdc" event={"ID":"1b3f478b-4142-46b8-a9ca-603e9e1860ac","Type":"ContainerDied","Data":"22e61ebbd8028a1aa9eec99aa283035d5fdfc12cc29f2dbc4e516b9b929c2ac2"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.951363 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e61ebbd8028a1aa9eec99aa283035d5fdfc12cc29f2dbc4e516b9b929c2ac2"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.951434 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6rxdc"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955617 4820 generic.go:334] "Generic (PLEG): container finished" podID="625e0821-44af-4965-aa51-75c1d5839e7c" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" exitCode=0
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955645 4820 generic.go:334] "Generic (PLEG): container finished" podID="625e0821-44af-4965-aa51-75c1d5839e7c" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" exitCode=143
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955703 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerDied","Data":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerDied","Data":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"625e0821-44af-4965-aa51-75c1d5839e7c","Type":"ContainerDied","Data":"97fb15d00cfc76b9f7c9d18ac0d54f998ad88f4be3de91f63fa0fc834066feec"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955704 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.955823 4820 scope.go:117] "RemoveContainer" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.960140 4820 generic.go:334] "Generic (PLEG): container finished" podID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerID="9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39" exitCode=143
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.960217 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerDied","Data":"9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39"}
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.973914 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" containerID="cri-o://fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" gracePeriod=30
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.990710 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.991172 4820 scope.go:117] "RemoveContainer" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994065 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") pod \"625e0821-44af-4965-aa51-75c1d5839e7c\" (UID: \"625e0821-44af-4965-aa51-75c1d5839e7c\") "
Feb 21 07:07:48 crc kubenswrapper[4820]: I0221 07:07:48.994988 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs" (OuterVolumeSpecName: "logs") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.014429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd" (OuterVolumeSpecName: "kube-api-access-7mptd") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "kube-api-access-7mptd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.028865 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.029397 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata"
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.029461 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029469 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log"
Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.029506 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerName="nova-cell1-conductor-db-sync"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029514 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerName="nova-cell1-conductor-db-sync"
Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029741 4820 memory_manager.go:354]
"RemoveStaleState removing state" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-log" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029773 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" containerName="nova-cell1-conductor-db-sync" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.029788 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" containerName="nova-metadata-metadata" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.030637 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.042069 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.043349 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.044758 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.073423 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data" (OuterVolumeSpecName: "config-data") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099751 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625e0821-44af-4965-aa51-75c1d5839e7c-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099769 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mptd\" (UniqueName: \"kubernetes.io/projected/625e0821-44af-4965-aa51-75c1d5839e7c-kube-api-access-7mptd\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099780 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.099790 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.111749 4820 scope.go:117] "RemoveContainer" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.113529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "625e0821-44af-4965-aa51-75c1d5839e7c" (UID: "625e0821-44af-4965-aa51-75c1d5839e7c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.114414 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": container with ID starting with b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c not found: ID does not exist" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.114449 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"} err="failed to get container status \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": rpc error: code = NotFound desc = could not find container \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": container with ID starting with 
b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c not found: ID does not exist" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.114488 4820 scope.go:117] "RemoveContainer" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" Feb 21 07:07:49 crc kubenswrapper[4820]: E0221 07:07:49.114775 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": container with ID starting with 97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341 not found: ID does not exist" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.114804 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"} err="failed to get container status \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": rpc error: code = NotFound desc = could not find container \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": container with ID starting with 97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341 not found: ID does not exist" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.115025 4820 scope.go:117] "RemoveContainer" containerID="b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.115846 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c"} err="failed to get container status \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": rpc error: code = NotFound desc = could not find container \"b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c\": container with ID 
starting with b216c07d395c20f7276fb8b2f0d79b34e48e35c45c1e823e0b8802e4c73b803c not found: ID does not exist" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.115893 4820 scope.go:117] "RemoveContainer" containerID="97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.116330 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341"} err="failed to get container status \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": rpc error: code = NotFound desc = could not find container \"97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341\": container with ID starting with 97403c59e1c92f948d4175b468a8bd16ad4a286639c8fea6fc00799cad640341 not found: ID does not exist" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201037 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " 
pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.201379 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/625e0821-44af-4965-aa51-75c1d5839e7c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.204872 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.207015 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.217810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"nova-cell1-conductor-0\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.294146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.310884 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.330318 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.331812 4820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.339942 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.340130 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.348002 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.402839 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508933 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.508986 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.610634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.614526 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.617936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.618204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.632660 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"nova-metadata-0\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") " pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.658801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.711324 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31265e58-52ac-4a6c-86b2-ec212e0ed318" path="/var/lib/kubelet/pods/31265e58-52ac-4a6c-86b2-ec212e0ed318/volumes" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.712190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="625e0821-44af-4965-aa51-75c1d5839e7c" path="/var/lib/kubelet/pods/625e0821-44af-4965-aa51-75c1d5839e7c/volumes" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.713499 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68596d31-1da0-47aa-9330-179af16beee5" path="/var/lib/kubelet/pods/68596d31-1da0-47aa-9330-179af16beee5/volumes" Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.880463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.987947 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89"} Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 07:07:49.987987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"5a11214c736375d2dba7c27ccd6b9be4d089093f6403a91834a045bac0e3cf8d"} Feb 21 07:07:49 crc kubenswrapper[4820]: I0221 
07:07:49.993904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerStarted","Data":"167b5165b4391c8783b551aad0df3cc918db35e3f8cb50ff81e948ca2a961b4f"} Feb 21 07:07:50 crc kubenswrapper[4820]: I0221 07:07:50.110047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:07:50 crc kubenswrapper[4820]: W0221 07:07:50.113152 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ec23217_e99b_4b39_8be4_d4278275c14b.slice/crio-394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49 WatchSource:0}: Error finding container 394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49: Status 404 returned error can't find the container with id 394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49 Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.014332 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.017102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerStarted","Data":"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.017178 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerStarted","Data":"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.017193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerStarted","Data":"394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.023402 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerStarted","Data":"4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c"} Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.023571 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.036760 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.036744857 podStartE2EDuration="2.036744857s" podCreationTimestamp="2026-02-21 07:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:51.034157486 +0000 UTC m=+1246.067241674" watchObservedRunningTime="2026-02-21 07:07:51.036744857 +0000 UTC m=+1246.069829055" Feb 21 07:07:51 crc kubenswrapper[4820]: I0221 07:07:51.058287 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.058271129 podStartE2EDuration="3.058271129s" podCreationTimestamp="2026-02-21 07:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:51.049709383 +0000 UTC m=+1246.082793591" watchObservedRunningTime="2026-02-21 07:07:51.058271129 +0000 UTC m=+1246.091355327" Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.283224 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.284654 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.286158 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 07:07:51 crc kubenswrapper[4820]: E0221 07:07:51.286188 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.039190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b"} Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.043857 4820 generic.go:334] "Generic (PLEG): container finished" podID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" exitCode=0 Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.044691 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerDied","Data":"fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128"} Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.357937 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.373441 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") pod \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.373589 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") pod \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.373695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") pod \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\" (UID: \"df035ce1-8e9b-4e72-a751-a56a7a2a613a\") " Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.383300 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z" (OuterVolumeSpecName: "kube-api-access-gbz8z") pod "df035ce1-8e9b-4e72-a751-a56a7a2a613a" (UID: "df035ce1-8e9b-4e72-a751-a56a7a2a613a"). InnerVolumeSpecName "kube-api-access-gbz8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.423515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df035ce1-8e9b-4e72-a751-a56a7a2a613a" (UID: "df035ce1-8e9b-4e72-a751-a56a7a2a613a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.424415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data" (OuterVolumeSpecName: "config-data") pod "df035ce1-8e9b-4e72-a751-a56a7a2a613a" (UID: "df035ce1-8e9b-4e72-a751-a56a7a2a613a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.478007 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbz8z\" (UniqueName: \"kubernetes.io/projected/df035ce1-8e9b-4e72-a751-a56a7a2a613a-kube-api-access-gbz8z\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.478336 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:52 crc kubenswrapper[4820]: I0221 07:07:52.478349 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df035ce1-8e9b-4e72-a751-a56a7a2a613a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.065487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerStarted","Data":"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89"} Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.065722 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.067813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df035ce1-8e9b-4e72-a751-a56a7a2a613a","Type":"ContainerDied","Data":"6e559911a5c0b4319322723a73f4f2e1a523f0fbef9ae966ae10c0602b1eb11b"} Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.067867 4820 scope.go:117] "RemoveContainer" containerID="fc2a39c65f9cae39af32572fb397cd1f2ed6925e3c1dcc3217ebcd6c2c8bd128" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.068088 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.111362 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.549987569 podStartE2EDuration="6.111342217s" podCreationTimestamp="2026-02-21 07:07:47 +0000 UTC" firstStartedPulling="2026-02-21 07:07:49.05505287 +0000 UTC m=+1244.088137068" lastFinishedPulling="2026-02-21 07:07:52.616407518 +0000 UTC m=+1247.649491716" observedRunningTime="2026-02-21 07:07:53.091839501 +0000 UTC m=+1248.124923699" watchObservedRunningTime="2026-02-21 07:07:53.111342217 +0000 UTC m=+1248.144426415" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.118206 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.132619 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.144182 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: E0221 07:07:53.144618 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.144633 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.144811 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" containerName="nova-scheduler-scheduler" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.145436 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.150421 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.163027 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.215886 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.215998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 
07:07:53.216118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.317382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.317556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.318299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.330491 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.334255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp4xn\" (UniqueName: 
\"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.337094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"nova-scheduler-0\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.468215 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.717733 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df035ce1-8e9b-4e72-a751-a56a7a2a613a" path="/var/lib/kubelet/pods/df035ce1-8e9b-4e72-a751-a56a7a2a613a/volumes" Feb 21 07:07:53 crc kubenswrapper[4820]: W0221 07:07:53.981146 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce77df06_566a_45b8_83f6_b788a3c81757.slice/crio-a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675 WatchSource:0}: Error finding container a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675: Status 404 returned error can't find the container with id a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675 Feb 21 07:07:53 crc kubenswrapper[4820]: I0221 07:07:53.986057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078541 4820 generic.go:334] "Generic (PLEG): container finished" podID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerID="f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2" exitCode=0 Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078683 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerDied","Data":"f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2"} Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61942ced-fcab-4240-b49a-ff65cdeceb00","Type":"ContainerDied","Data":"c40349a2af25367fc0c110fa968f40da829ddde9d2559551b284dfe24a879a9e"} Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.078964 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40349a2af25367fc0c110fa968f40da829ddde9d2559551b284dfe24a879a9e" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.084057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerStarted","Data":"a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675"} Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.086145 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.132782 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.132837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.132908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.133038 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") pod \"61942ced-fcab-4240-b49a-ff65cdeceb00\" (UID: \"61942ced-fcab-4240-b49a-ff65cdeceb00\") " Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.133322 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs" (OuterVolumeSpecName: "logs") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.133653 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61942ced-fcab-4240-b49a-ff65cdeceb00-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.136645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm" (OuterVolumeSpecName: "kube-api-access-pqpwm") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "kube-api-access-pqpwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.167120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data" (OuterVolumeSpecName: "config-data") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.169225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61942ced-fcab-4240-b49a-ff65cdeceb00" (UID: "61942ced-fcab-4240-b49a-ff65cdeceb00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.236299 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqpwm\" (UniqueName: \"kubernetes.io/projected/61942ced-fcab-4240-b49a-ff65cdeceb00-kube-api-access-pqpwm\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.236333 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.236343 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61942ced-fcab-4240-b49a-ff65cdeceb00-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.663008 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:54 crc kubenswrapper[4820]: I0221 07:07:54.664404 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.093050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerStarted","Data":"e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec"} Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.093107 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.117339 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.117326222 podStartE2EDuration="2.117326222s" podCreationTimestamp="2026-02-21 07:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:55.11291918 +0000 UTC m=+1250.146003378" watchObservedRunningTime="2026-02-21 07:07:55.117326222 +0000 UTC m=+1250.150410420" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.141143 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.148409 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176274 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: E0221 07:07:55.176620 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176635 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-log" Feb 21 07:07:55 crc kubenswrapper[4820]: E0221 07:07:55.176653 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176660 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176829 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" 
containerName="nova-api-log" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.176845 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" containerName="nova-api-api" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.177779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.181672 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.194140 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.270865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.270939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.271087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.271257 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcb7\" (UniqueName: 
\"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373211 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373300 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.373754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.377575 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.377580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.399345 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"nova-api-0\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") " pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.531413 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:07:55 crc kubenswrapper[4820]: I0221 07:07:55.735083 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61942ced-fcab-4240-b49a-ff65cdeceb00" path="/var/lib/kubelet/pods/61942ced-fcab-4240-b49a-ff65cdeceb00/volumes" Feb 21 07:07:56 crc kubenswrapper[4820]: I0221 07:07:56.014750 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:07:56 crc kubenswrapper[4820]: I0221 07:07:56.114405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerStarted","Data":"c093ecf877a602a1aabdc4e519a2734f4375ca978ae6815cef185d7613344b66"} Feb 21 07:07:57 crc kubenswrapper[4820]: I0221 07:07:57.123933 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerStarted","Data":"b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76"} Feb 21 07:07:57 crc kubenswrapper[4820]: I0221 07:07:57.124323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerStarted","Data":"33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9"} Feb 21 07:07:57 crc kubenswrapper[4820]: I0221 07:07:57.155121 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.155098359 podStartE2EDuration="2.155098359s" podCreationTimestamp="2026-02-21 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:07:57.146989046 +0000 UTC m=+1252.180073244" watchObservedRunningTime="2026-02-21 07:07:57.155098359 +0000 UTC m=+1252.188182557" Feb 21 07:07:58 crc kubenswrapper[4820]: I0221 07:07:58.468486 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 07:07:59 crc kubenswrapper[4820]: I0221 07:07:59.432581 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 21 07:07:59 crc kubenswrapper[4820]: I0221 07:07:59.662439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:07:59 crc kubenswrapper[4820]: I0221 07:07:59.662747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:08:00 crc kubenswrapper[4820]: I0221 07:08:00.673421 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:00 crc kubenswrapper[4820]: I0221 07:08:00.673426 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:03 crc kubenswrapper[4820]: I0221 07:08:03.469911 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 07:08:03 crc kubenswrapper[4820]: I0221 07:08:03.500832 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 07:08:04 crc kubenswrapper[4820]: I0221 07:08:04.222009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 07:08:05 crc kubenswrapper[4820]: I0221 07:08:05.533698 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 21 07:08:05 crc kubenswrapper[4820]: I0221 07:08:05.533997 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:08:06 crc kubenswrapper[4820]: I0221 07:08:06.616382 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:06 crc kubenswrapper[4820]: I0221 07:08:06.616398 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.667658 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.668301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.676366 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 07:08:09 crc kubenswrapper[4820]: I0221 07:08:09.676860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264030 4820 generic.go:334] "Generic (PLEG): container finished" podID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerID="8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890" exitCode=137 Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerDied","Data":"8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890"} Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24fcfcd7-30d6-4101-af31-619b24afcb8d","Type":"ContainerDied","Data":"d0bbd16e326afec3eb0db3b65db6e116903c8b3a5f97fc7f0031dfc181db09dc"} Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.264429 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0bbd16e326afec3eb0db3b65db6e116903c8b3a5f97fc7f0031dfc181db09dc" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.282391 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.426494 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") pod \"24fcfcd7-30d6-4101-af31-619b24afcb8d\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.426639 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") pod \"24fcfcd7-30d6-4101-af31-619b24afcb8d\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.427099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") pod \"24fcfcd7-30d6-4101-af31-619b24afcb8d\" (UID: \"24fcfcd7-30d6-4101-af31-619b24afcb8d\") " Feb 21 07:08:11 crc 
kubenswrapper[4820]: I0221 07:08:11.432501 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh" (OuterVolumeSpecName: "kube-api-access-s75qh") pod "24fcfcd7-30d6-4101-af31-619b24afcb8d" (UID: "24fcfcd7-30d6-4101-af31-619b24afcb8d"). InnerVolumeSpecName "kube-api-access-s75qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.452134 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data" (OuterVolumeSpecName: "config-data") pod "24fcfcd7-30d6-4101-af31-619b24afcb8d" (UID: "24fcfcd7-30d6-4101-af31-619b24afcb8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.454654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24fcfcd7-30d6-4101-af31-619b24afcb8d" (UID: "24fcfcd7-30d6-4101-af31-619b24afcb8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.528920 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s75qh\" (UniqueName: \"kubernetes.io/projected/24fcfcd7-30d6-4101-af31-619b24afcb8d-kube-api-access-s75qh\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.528952 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:11 crc kubenswrapper[4820]: I0221 07:08:11.528962 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24fcfcd7-30d6-4101-af31-619b24afcb8d-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.272903 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.298757 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.322529 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.341007 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:08:12 crc kubenswrapper[4820]: E0221 07:08:12.341478 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.341497 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.341682 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" containerName="nova-cell1-novncproxy-novncproxy"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.342367 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.344370 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.344692 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.344712 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.352113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.446317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.446381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.447047 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.447179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.447510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.548976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549079 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.549526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.552964 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.553352 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.554280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.554645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.568014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:12 crc kubenswrapper[4820]: I0221 07:08:12.681766 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.162460 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 07:08:13 crc kubenswrapper[4820]: W0221 07:08:13.168850 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1db760_d9fc_477f_bc0b_8119d247253b.slice/crio-6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff WatchSource:0}: Error finding container 6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff: Status 404 returned error can't find the container with id 6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.283749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerStarted","Data":"6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff"}
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.708028 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fcfcd7-30d6-4101-af31-619b24afcb8d" path="/var/lib/kubelet/pods/24fcfcd7-30d6-4101-af31-619b24afcb8d/volumes"
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.815608 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.815889 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.815926 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.816562 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 07:08:13 crc kubenswrapper[4820]: I0221 07:08:13.816616 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443" gracePeriod=600
Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.293775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerStarted","Data":"24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd"}
Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297748 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443" exitCode=0
Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297791 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443"}
Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0"}
Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.297830 4820 scope.go:117] "RemoveContainer" containerID="a0f682e2000efd774d622a3e32ffb0bf77aef757862422932fda82c7a3e96b5c"
Feb 21 07:08:14 crc kubenswrapper[4820]: I0221 07:08:14.317998 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.317972528 podStartE2EDuration="2.317972528s" podCreationTimestamp="2026-02-21 07:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:14.308390025 +0000 UTC m=+1269.341474243" watchObservedRunningTime="2026-02-21 07:08:14.317972528 +0000 UTC m=+1269.351056726"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.536593 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.536964 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.537284 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.537320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.539764 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.540495 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.725572 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"]
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.727020 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.743018 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"]
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.919678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.919983 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920234 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:15 crc kubenswrapper[4820]: I0221 07:08:15.920433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.021972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.022902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.023071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.023171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.042567 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dnsmasq-dns-58f6456c9f-zb9jc\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.044324 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:16 crc kubenswrapper[4820]: I0221 07:08:16.537283 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"]
Feb 21 07:08:16 crc kubenswrapper[4820]: W0221 07:08:16.539151 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc228462_9ac8_475c_859b_bbce5678a5ea.slice/crio-d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451 WatchSource:0}: Error finding container d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451: Status 404 returned error can't find the container with id d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.329571 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61" exitCode=0
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.329624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerDied","Data":"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"}
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.329872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerStarted","Data":"d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451"}
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.681941 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882126 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882584 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-central-agent" containerID="cri-o://8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" gracePeriod=30
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882676 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" containerID="cri-o://3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" gracePeriod=30
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882725 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" containerID="cri-o://e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" gracePeriod=30
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.882692 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-notification-agent" containerID="cri-o://e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" gracePeriod=30
Feb 21 07:08:17 crc kubenswrapper[4820]: I0221 07:08:17.895947 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.195:3000/\": EOF"
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343283 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" exitCode=0
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343613 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" exitCode=2
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343626 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" exitCode=0
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89"}
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343711 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b"}
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.343727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89"}
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.346302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerStarted","Data":"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"}
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.346454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc"
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.368952 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" podStartSLOduration=3.368932778 podStartE2EDuration="3.368932778s" podCreationTimestamp="2026-02-21 07:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:18.362482401 +0000 UTC m=+1273.395566609" watchObservedRunningTime="2026-02-21 07:08:18.368932778 +0000 UTC m=+1273.402016996"
Feb 21 07:08:18 crc kubenswrapper[4820]: I0221 07:08:18.415294 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.195:3000/\": dial tcp 10.217.0.195:3000: connect: connection refused"
Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.017573 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.018124 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" containerID="cri-o://33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9" gracePeriod=30
Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.018278 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" containerID="cri-o://b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76" gracePeriod=30
Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.355701 4820 generic.go:334] "Generic (PLEG): container finished" podID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerID="33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9" exitCode=143
Feb 21 07:08:19 crc kubenswrapper[4820]: I0221 07:08:19.355740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerDied","Data":"33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9"}
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.384832 4820 generic.go:334] "Generic (PLEG): container finished" podID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerID="b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76" exitCode=0
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.384909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerDied","Data":"b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76"}
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.647582 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.682281 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.724463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759254 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759412 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759473 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.759525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") pod \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\" (UID: \"bdf0d0bd-7674-4b6d-8e43-e199356ee168\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.760830 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs" (OuterVolumeSpecName: "logs") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.785720 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7" (OuterVolumeSpecName: "kube-api-access-stcb7") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "kube-api-access-stcb7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.805973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.813652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data" (OuterVolumeSpecName: "config-data") pod "bdf0d0bd-7674-4b6d-8e43-e199356ee168" (UID: "bdf0d0bd-7674-4b6d-8e43-e199356ee168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.824164 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861868 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stcb7\" (UniqueName: \"kubernetes.io/projected/bdf0d0bd-7674-4b6d-8e43-e199356ee168-kube-api-access-stcb7\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861913 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861928 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf0d0bd-7674-4b6d-8e43-e199356ee168-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.861940 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf0d0bd-7674-4b6d-8e43-e199356ee168-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.963813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964223 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964336 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") "
Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\"
(UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.964920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.965766 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.965993 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.966067 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/650275e2-1f20-427a-89b7-de2c084d3b40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.971418 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts" (OuterVolumeSpecName: "scripts") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:22 crc kubenswrapper[4820]: I0221 07:08:22.972336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm" (OuterVolumeSpecName: "kube-api-access-v7pwm") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "kube-api-access-v7pwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.004354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.055814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.066268 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data" (OuterVolumeSpecName: "config-data") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.066832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") pod \"650275e2-1f20-427a-89b7-de2c084d3b40\" (UID: \"650275e2-1f20-427a-89b7-de2c084d3b40\") " Feb 21 07:08:23 crc kubenswrapper[4820]: W0221 07:08:23.066904 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/650275e2-1f20-427a-89b7-de2c084d3b40/volumes/kubernetes.io~secret/config-data Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.066922 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data" (OuterVolumeSpecName: "config-data") pod "650275e2-1f20-427a-89b7-de2c084d3b40" (UID: "650275e2-1f20-427a-89b7-de2c084d3b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067322 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067338 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067346 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067355 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650275e2-1f20-427a-89b7-de2c084d3b40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.067363 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7pwm\" (UniqueName: \"kubernetes.io/projected/650275e2-1f20-427a-89b7-de2c084d3b40-kube-api-access-v7pwm\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396144 4820 generic.go:334] "Generic (PLEG): container finished" podID="650275e2-1f20-427a-89b7-de2c084d3b40" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" exitCode=0 Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396224 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873"} Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"650275e2-1f20-427a-89b7-de2c084d3b40","Type":"ContainerDied","Data":"5a11214c736375d2dba7c27ccd6b9be4d089093f6403a91834a045bac0e3cf8d"} Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.396363 4820 scope.go:117] "RemoveContainer" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.399084 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.401377 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdf0d0bd-7674-4b6d-8e43-e199356ee168","Type":"ContainerDied","Data":"c093ecf877a602a1aabdc4e519a2734f4375ca978ae6815cef185d7613344b66"} Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.418011 4820 scope.go:117] "RemoveContainer" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.422628 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.439002 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.450676 4820 scope.go:117] "RemoveContainer" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" Feb 21 07:08:23 crc 
kubenswrapper[4820]: I0221 07:08:23.460781 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.473291 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.489757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490264 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-notification-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490289 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-notification-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490310 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490322 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490355 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490374 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-central-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490384 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" 
containerName="ceilometer-central-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490413 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490421 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.490432 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490643 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="ceilometer-central-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490672 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="sg-core" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490688 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" containerName="proxy-httpd" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490700 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-log" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490721 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" containerName="nova-api-api" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.490731 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" 
containerName="ceilometer-notification-agent" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.491983 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.493449 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.503156 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.503412 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.503552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.508679 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.508829 4820 scope.go:117] "RemoveContainer" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.518637 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.521740 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.530021 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.530577 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.539462 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.540492 4820 scope.go:117] "RemoveContainer" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.542170 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89\": container with ID starting with e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89 not found: ID does not exist" containerID="e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.542325 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89"} err="failed to get container status \"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89\": rpc error: code = NotFound desc = could not find container \"e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89\": container with ID starting with e776f01dbfbf31f0e8ec26f6b4f82a8d20e32558baf445c06b09b66ee1a37d89 not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.543402 4820 scope.go:117] "RemoveContainer" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 
07:08:23.544015 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b\": container with ID starting with 3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b not found: ID does not exist" containerID="3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.544078 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b"} err="failed to get container status \"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b\": rpc error: code = NotFound desc = could not find container \"3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b\": container with ID starting with 3ac7cfe98109aa41eba99510616e6fb71cf8902b4333acd8c0dadd3ed6e7005b not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.544116 4820 scope.go:117] "RemoveContainer" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.553910 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873\": container with ID starting with e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873 not found: ID does not exist" containerID="e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.553969 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873"} err="failed to get container status \"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873\": rpc 
error: code = NotFound desc = could not find container \"e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873\": container with ID starting with e5d4e2991496a99d81ca276c8533b04e91c5c19d6686b901c776b254280be873 not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.554004 4820 scope.go:117] "RemoveContainer" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" Feb 21 07:08:23 crc kubenswrapper[4820]: E0221 07:08:23.554514 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89\": container with ID starting with 8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89 not found: ID does not exist" containerID="8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.554543 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89"} err="failed to get container status \"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89\": rpc error: code = NotFound desc = could not find container \"8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89\": container with ID starting with 8daa447df4a85defb5c8475985357e02fe2aae983f4f7af0e45df14ccba83d89 not found: ID does not exist" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.554563 4820 scope.go:117] "RemoveContainer" containerID="b8c2b5c4ec32625c6a5a30f50d7d06894d0b3ce572fc22fbec2a33bae4f32c76" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.572339 4820 scope.go:117] "RemoveContainer" containerID="33149263d791a43228a6de8d4d236b3ce924b0d31851e00fcc0b14be3b0951c9" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.575759 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"nova-api-0\" 
(UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678223 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678303 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678370 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678497 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678546 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678669 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.678764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.679544 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.684523 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.687009 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.693493 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.693709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.694073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.694417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.694589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.709414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.709959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qbqd\" (UniqueName: 
\"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"nova-api-0\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") " pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.712674 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650275e2-1f20-427a-89b7-de2c084d3b40" path="/var/lib/kubelet/pods/650275e2-1f20-427a-89b7-de2c084d3b40/volumes" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.713642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf0d0bd-7674-4b6d-8e43-e199356ee168" path="/var/lib/kubelet/pods/bdf0d0bd-7674-4b6d-8e43-e199356ee168/volumes" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.714384 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: 
\"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780492 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780567 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780723 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.780749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.781355 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.781588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.785953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" 
Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.787406 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.787798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.793994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.801225 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"ceilometer-0\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") " pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.835861 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.854797 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.882794 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.887516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" 
Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.888584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.889066 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:23 crc kubenswrapper[4820]: I0221 07:08:23.901150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"nova-cell1-cell-mapping-k9s8t\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.180965 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.327536 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:08:24 crc kubenswrapper[4820]: W0221 07:08:24.401969 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod118c08af_2bde_440a_a9cf_ad089288aae6.slice/crio-a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e WatchSource:0}: Error finding container a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e: Status 404 returned error can't find the container with id a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.404626 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.421734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerStarted","Data":"d8c35a16f23800f75c01941a376d91e2f3aecbf3120d50dff54c29623c837dbf"} Feb 21 07:08:24 crc kubenswrapper[4820]: I0221 07:08:24.641385 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.444860 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.448090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerStarted","Data":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} Feb 21 07:08:25 crc 
kubenswrapper[4820]: I0221 07:08:25.448134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerStarted","Data":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.453007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerStarted","Data":"ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.453046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerStarted","Data":"04a5336f0b77ca4614d6925d537b6faf02dd14a95bdd172301cd6fd45ae3d852"} Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.490426 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k9s8t" podStartSLOduration=2.490410251 podStartE2EDuration="2.490410251s" podCreationTimestamp="2026-02-21 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:25.486601987 +0000 UTC m=+1280.519686185" watchObservedRunningTime="2026-02-21 07:08:25.490410251 +0000 UTC m=+1280.523494439" Feb 21 07:08:25 crc kubenswrapper[4820]: I0221 07:08:25.494310 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.494299698 podStartE2EDuration="2.494299698s" podCreationTimestamp="2026-02-21 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:25.470850044 +0000 UTC m=+1280.503934262" watchObservedRunningTime="2026-02-21 
07:08:25.494299698 +0000 UTC m=+1280.527383896" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.046335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.140322 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.140899 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" containerID="cri-o://9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70" gracePeriod=10 Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.489065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"} Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.489406 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"} Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.492192 4820 generic.go:334] "Generic (PLEG): container finished" podID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerID="9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70" exitCode=0 Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.492898 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerDied","Data":"9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70"} Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.742604 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842224 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842392 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842434 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.842488 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") pod \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\" (UID: \"bc801035-b5e1-4e87-b8a1-c1d9474466c5\") " Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.858777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq" (OuterVolumeSpecName: "kube-api-access-tz8qq") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "kube-api-access-tz8qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.902436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config" (OuterVolumeSpecName: "config") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.910125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.919682 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.921005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.932010 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc801035-b5e1-4e87-b8a1-c1d9474466c5" (UID: "bc801035-b5e1-4e87-b8a1-c1d9474466c5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz8qq\" (UniqueName: \"kubernetes.io/projected/bc801035-b5e1-4e87-b8a1-c1d9474466c5-kube-api-access-tz8qq\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944655 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944667 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944675 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944683 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:26 crc kubenswrapper[4820]: I0221 07:08:26.944690 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc801035-b5e1-4e87-b8a1-c1d9474466c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.502754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"} Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.504868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" event={"ID":"bc801035-b5e1-4e87-b8a1-c1d9474466c5","Type":"ContainerDied","Data":"56706ef7edc4c45f0fb9cf68555159ed6f3a3b2712a13f674db70d52356a6d75"} Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.504907 4820 scope.go:117] "RemoveContainer" containerID="9b56ec3e0ab84221e159324991d7abf3d8befbacabd9ffbd2b2a9e9b5dadad70" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.505020 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.539163 4820 scope.go:117] "RemoveContainer" containerID="5f5ab8e6435ddfdb4e8c77819cee3cfc2fa9fc05ae6a9ae155da8503f7b0f636" Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.561312 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.579519 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-849fff7679-pwg2d"] Feb 21 07:08:27 crc kubenswrapper[4820]: I0221 07:08:27.708252 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" path="/var/lib/kubelet/pods/bc801035-b5e1-4e87-b8a1-c1d9474466c5/volumes" Feb 21 07:08:29 crc kubenswrapper[4820]: I0221 07:08:29.523806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerStarted","Data":"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"} Feb 21 07:08:29 crc kubenswrapper[4820]: I0221 07:08:29.524454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:08:29 crc kubenswrapper[4820]: I0221 07:08:29.550607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.658710976 podStartE2EDuration="6.550589805s" podCreationTimestamp="2026-02-21 07:08:23 +0000 UTC" firstStartedPulling="2026-02-21 07:08:24.41555404 +0000 UTC m=+1279.448638238" lastFinishedPulling="2026-02-21 07:08:28.307432869 +0000 UTC m=+1283.340517067" observedRunningTime="2026-02-21 07:08:29.543375536 +0000 UTC m=+1284.576459754" watchObservedRunningTime="2026-02-21 07:08:29.550589805 +0000 UTC m=+1284.583674003" Feb 21 07:08:30 crc kubenswrapper[4820]: I0221 07:08:30.537494 4820 generic.go:334] "Generic (PLEG): 
container finished" podID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerID="ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36" exitCode=0 Feb 21 07:08:30 crc kubenswrapper[4820]: I0221 07:08:30.537618 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerDied","Data":"ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36"} Feb 21 07:08:31 crc kubenswrapper[4820]: I0221 07:08:31.439682 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-849fff7679-pwg2d" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: i/o timeout" Feb 21 07:08:31 crc kubenswrapper[4820]: I0221 07:08:31.914395 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: 
\"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.043951 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") pod \"06da7378-1c64-43e9-8d97-63a92fe503fc\" (UID: \"06da7378-1c64-43e9-8d97-63a92fe503fc\") " Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.049961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts" (OuterVolumeSpecName: "scripts") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.053609 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g" (OuterVolumeSpecName: "kube-api-access-bf84g") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "kube-api-access-bf84g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.073503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data" (OuterVolumeSpecName: "config-data") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.076791 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06da7378-1c64-43e9-8d97-63a92fe503fc" (UID: "06da7378-1c64-43e9-8d97-63a92fe503fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146691 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf84g\" (UniqueName: \"kubernetes.io/projected/06da7378-1c64-43e9-8d97-63a92fe503fc-kube-api-access-bf84g\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146722 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146731 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.146739 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06da7378-1c64-43e9-8d97-63a92fe503fc-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.555083 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k9s8t" event={"ID":"06da7378-1c64-43e9-8d97-63a92fe503fc","Type":"ContainerDied","Data":"04a5336f0b77ca4614d6925d537b6faf02dd14a95bdd172301cd6fd45ae3d852"}
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.555382 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a5336f0b77ca4614d6925d537b6faf02dd14a95bdd172301cd6fd45ae3d852"
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.555139 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k9s8t"
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.744343 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.744547 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" containerID="cri-o://e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" gracePeriod=30
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.795525 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.795814 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log" containerID="cri-o://0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" gracePeriod=30
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.796396 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api" containerID="cri-o://c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" gracePeriod=30
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.807685 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.807911 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" containerID="cri-o://4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" gracePeriod=30
Feb 21 07:08:32 crc kubenswrapper[4820]: I0221 07:08:32.808331 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" containerID="cri-o://a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" gracePeriod=30
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.456940 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.471333 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.472919 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.474412 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.474474 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564411 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10708bc-02ad-4956-95a6-ae03aa172988" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b" exitCode=0
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564442 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10708bc-02ad-4956-95a6-ae03aa172988" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e" exitCode=143
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerDied","Data":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"}
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerDied","Data":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"}
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f10708bc-02ad-4956-95a6-ae03aa172988","Type":"ContainerDied","Data":"d8c35a16f23800f75c01941a376d91e2f3aecbf3120d50dff54c29623c837dbf"}
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564533 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.564559 4820 scope.go:117] "RemoveContainer" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.566315 4820 generic.go:334] "Generic (PLEG): container finished" podID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" exitCode=143
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.566346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerDied","Data":"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"}
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") "
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581730 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") "
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") "
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581913 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") "
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.581968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") "
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.582008 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") pod \"f10708bc-02ad-4956-95a6-ae03aa172988\" (UID: \"f10708bc-02ad-4956-95a6-ae03aa172988\") "
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.582377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs" (OuterVolumeSpecName: "logs") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.582513 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10708bc-02ad-4956-95a6-ae03aa172988-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.585525 4820 scope.go:117] "RemoveContainer" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.587001 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd" (OuterVolumeSpecName: "kube-api-access-4qbqd") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "kube-api-access-4qbqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.603656 4820 scope.go:117] "RemoveContainer" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.604117 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": container with ID starting with c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b not found: ID does not exist" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604149 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} err="failed to get container status \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": rpc error: code = NotFound desc = could not find container \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": container with ID starting with c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b not found: ID does not exist"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604169 4820 scope.go:117] "RemoveContainer" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.604543 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": container with ID starting with 0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e not found: ID does not exist" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604603 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} err="failed to get container status \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": rpc error: code = NotFound desc = could not find container \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": container with ID starting with 0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e not found: ID does not exist"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604623 4820 scope.go:117] "RemoveContainer" containerID="c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604900 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b"} err="failed to get container status \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": rpc error: code = NotFound desc = could not find container \"c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b\": container with ID starting with c57770ecc232b3041e5ccee164bc94ec7031786a572ade162f627c5905f1365b not found: ID does not exist"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.604923 4820 scope.go:117] "RemoveContainer" containerID="0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.605130 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e"} err="failed to get container status \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": rpc error: code = NotFound desc = could not find container \"0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e\": container with ID starting with 0752622af7794871e40477d56e8052b9fb122c056c6d461bc50a563f81c7876e not found: ID does not exist"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.609518 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data" (OuterVolumeSpecName: "config-data") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.628225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.638539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.646702 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f10708bc-02ad-4956-95a6-ae03aa172988" (UID: "f10708bc-02ad-4956-95a6-ae03aa172988"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684080 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684110 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684121 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684130 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qbqd\" (UniqueName: \"kubernetes.io/projected/f10708bc-02ad-4956-95a6-ae03aa172988-kube-api-access-4qbqd\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.684140 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10708bc-02ad-4956-95a6-ae03aa172988-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.889980 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.898649 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.917880 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918298 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918315 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918327 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerName="nova-manage"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918335 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerName="nova-manage"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918357 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918378 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api"
Feb 21 07:08:33 crc kubenswrapper[4820]: E0221 07:08:33.918404 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="init"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918412 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="init"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918588 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc801035-b5e1-4e87-b8a1-c1d9474466c5" containerName="dnsmasq-dns"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918598 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-api"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918607 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" containerName="nova-manage"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.918624 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" containerName="nova-api-log"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.919509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.923723 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.924168 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.931167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:33 crc kubenswrapper[4820]: I0221 07:08:33.931856 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.089841 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090518 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.090732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.191984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192330 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.192842 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.196479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.196587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.196691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.201585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.210904 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"nova-api-0\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") " pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.290590 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 07:08:34 crc kubenswrapper[4820]: I0221 07:08:34.740037 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 07:08:34 crc kubenswrapper[4820]: W0221 07:08:34.746337 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e16d52c_9322_49cf_9948_8d1c56c0a5ed.slice/crio-c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0 WatchSource:0}: Error finding container c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0: Status 404 returned error can't find the container with id c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.586638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerStarted","Data":"841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7"}
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.586871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerStarted","Data":"23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110"}
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.586882 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerStarted","Data":"c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0"}
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.609353 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.609334309 podStartE2EDuration="2.609334309s" podCreationTimestamp="2026-02-21 07:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:35.605429702 +0000 UTC m=+1290.638513920" watchObservedRunningTime="2026-02-21 07:08:35.609334309 +0000 UTC m=+1290.642418507"
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.707918 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10708bc-02ad-4956-95a6-ae03aa172988" path="/var/lib/kubelet/pods/f10708bc-02ad-4956-95a6-ae03aa172988/volumes"
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.940007 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:50358->10.217.0.197:8775: read: connection reset by peer"
Feb 21 07:08:35 crc kubenswrapper[4820]: I0221 07:08:35.940048 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:50360->10.217.0.197:8775: read: connection reset by peer"
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.377188 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.530775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") "
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.530944 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") "
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.530986 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") "
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") "
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") pod \"5ec23217-e99b-4b39-8be4-d4278275c14b\" (UID: \"5ec23217-e99b-4b39-8be4-d4278275c14b\") "
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531299 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs" (OuterVolumeSpecName: "logs") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.531657 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec23217-e99b-4b39-8be4-d4278275c14b-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.537739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb" (OuterVolumeSpecName: "kube-api-access-4chfb") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "kube-api-access-4chfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.560038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.566840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data" (OuterVolumeSpecName: "config-data") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.590005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5ec23217-e99b-4b39-8be4-d4278275c14b" (UID: "5ec23217-e99b-4b39-8be4-d4278275c14b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601450 4820 generic.go:334] "Generic (PLEG): container finished" podID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e" exitCode=0
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerDied","Data":"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"}
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec23217-e99b-4b39-8be4-d4278275c14b","Type":"ContainerDied","Data":"394bcc8e3a2a160440b4df43c43758efe0ea6054acb1691dc8c447d26df83e49"}
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.601653 4820 scope.go:117] "RemoveContainer" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633006 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633229 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chfb\" (UniqueName: \"kubernetes.io/projected/5ec23217-e99b-4b39-8be4-d4278275c14b-kube-api-access-4chfb\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633345 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.633427 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec23217-e99b-4b39-8be4-d4278275c14b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.660799 4820 scope.go:117] "RemoveContainer" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.668020 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.682083 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.684893 4820 scope.go:117] "RemoveContainer" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"
Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.685571 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e\": container with ID starting with a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e not found: ID does not exist" containerID="a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"
Feb 21 07:08:36 crc kubenswrapper[4820]: 
I0221 07:08:36.685614 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e"} err="failed to get container status \"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e\": rpc error: code = NotFound desc = could not find container \"a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e\": container with ID starting with a931e7510eb1b50c244cb8e103c4ece70b466c5d66ff5fec0f21a8b2c879668e not found: ID does not exist" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.685639 4820 scope.go:117] "RemoveContainer" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.685974 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d\": container with ID starting with 4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d not found: ID does not exist" containerID="4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.686018 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d"} err="failed to get container status \"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d\": rpc error: code = NotFound desc = could not find container \"4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d\": container with ID starting with 4ed7b539006f1466332587a20f5dbaa7bac5b05757cea5a9d14c2ebd1f1ea32d not found: ID does not exist" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.697488 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.698092 
4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698173 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" Feb 21 07:08:36 crc kubenswrapper[4820]: E0221 07:08:36.698290 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698351 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698602 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-metadata" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.698681 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" containerName="nova-metadata-log" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.699724 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.711532 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.711566 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.723859 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836855 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.836948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.938848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.938897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.938988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.939025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " 
pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.939114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.939791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.941858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.942914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.943037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:36 crc kubenswrapper[4820]: I0221 07:08:36.954821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wf5\" (UniqueName: 
\"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"nova-metadata-0\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") " pod="openstack/nova-metadata-0" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.014982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.488112 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.618674 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerStarted","Data":"a5ce2f4d318a8be4343d1c00aa8f9b38475fee7ae1d50bf1b4be7e34360eab36"} Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.620932 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce77df06-566a-45b8-83f6-b788a3c81757" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" exitCode=0 Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.620978 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerDied","Data":"e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec"} Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.716015 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec23217-e99b-4b39-8be4-d4278275c14b" path="/var/lib/kubelet/pods/5ec23217-e99b-4b39-8be4-d4278275c14b/volumes" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.833650 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.961672 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") pod \"ce77df06-566a-45b8-83f6-b788a3c81757\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.961855 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") pod \"ce77df06-566a-45b8-83f6-b788a3c81757\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.961880 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") pod \"ce77df06-566a-45b8-83f6-b788a3c81757\" (UID: \"ce77df06-566a-45b8-83f6-b788a3c81757\") " Feb 21 07:08:37 crc kubenswrapper[4820]: I0221 07:08:37.968939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn" (OuterVolumeSpecName: "kube-api-access-gp4xn") pod "ce77df06-566a-45b8-83f6-b788a3c81757" (UID: "ce77df06-566a-45b8-83f6-b788a3c81757"). InnerVolumeSpecName "kube-api-access-gp4xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.000191 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data" (OuterVolumeSpecName: "config-data") pod "ce77df06-566a-45b8-83f6-b788a3c81757" (UID: "ce77df06-566a-45b8-83f6-b788a3c81757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.005463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce77df06-566a-45b8-83f6-b788a3c81757" (UID: "ce77df06-566a-45b8-83f6-b788a3c81757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.064602 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.064638 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp4xn\" (UniqueName: \"kubernetes.io/projected/ce77df06-566a-45b8-83f6-b788a3c81757-kube-api-access-gp4xn\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.064648 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce77df06-566a-45b8-83f6-b788a3c81757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.632699 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerStarted","Data":"21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52"} Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.633038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerStarted","Data":"4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1"} Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.634429 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce77df06-566a-45b8-83f6-b788a3c81757","Type":"ContainerDied","Data":"a44bc367b522a88e4a26e286fbe21e659f7f448ff1d55ea084262764e2d9e675"} Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.634497 4820 scope.go:117] "RemoveContainer" containerID="e01a7100792a20e8877ccd3c413a5aa8fea200e04710bdbbf4d885e67d9155ec" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.634513 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.670425 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.670399742 podStartE2EDuration="2.670399742s" podCreationTimestamp="2026-02-21 07:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:38.659744889 +0000 UTC m=+1293.692829097" watchObservedRunningTime="2026-02-21 07:08:38.670399742 +0000 UTC m=+1293.703483940" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.688736 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.703497 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.714172 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: E0221 07:08:38.714612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.714636 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" 
containerName="nova-scheduler-scheduler" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.714880 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" containerName="nova-scheduler-scheduler" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.715731 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.724540 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.725636 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.878607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.878680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.878756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.980847 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.980928 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.981000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:38 crc kubenswrapper[4820]: I0221 07:08:38.991378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.004638 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.006884 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbfq\" (UniqueName: 
\"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"nova-scheduler-0\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") " pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.034328 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.441548 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:08:39 crc kubenswrapper[4820]: W0221 07:08:39.447591 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ca75969_e299_435a_a607_d470d4ab831e.slice/crio-fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf WatchSource:0}: Error finding container fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf: Status 404 returned error can't find the container with id fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.643554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerStarted","Data":"fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf"} Feb 21 07:08:39 crc kubenswrapper[4820]: I0221 07:08:39.706898 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce77df06-566a-45b8-83f6-b788a3c81757" path="/var/lib/kubelet/pods/ce77df06-566a-45b8-83f6-b788a3c81757/volumes" Feb 21 07:08:40 crc kubenswrapper[4820]: I0221 07:08:40.656990 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerStarted","Data":"f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd"} Feb 21 07:08:40 crc kubenswrapper[4820]: I0221 07:08:40.692990 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.692966942 podStartE2EDuration="2.692966942s" podCreationTimestamp="2026-02-21 07:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:08:40.682776042 +0000 UTC m=+1295.715860250" watchObservedRunningTime="2026-02-21 07:08:40.692966942 +0000 UTC m=+1295.726051140" Feb 21 07:08:42 crc kubenswrapper[4820]: I0221 07:08:42.015641 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:08:42 crc kubenswrapper[4820]: I0221 07:08:42.015728 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 07:08:44 crc kubenswrapper[4820]: I0221 07:08:44.034661 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 07:08:44 crc kubenswrapper[4820]: I0221 07:08:44.292317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:08:44 crc kubenswrapper[4820]: I0221 07:08:44.292812 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 07:08:45 crc kubenswrapper[4820]: I0221 07:08:45.310557 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:45 crc kubenswrapper[4820]: I0221 07:08:45.310574 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:47 crc kubenswrapper[4820]: I0221 07:08:47.015246 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:08:47 crc kubenswrapper[4820]: I0221 07:08:47.015946 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 07:08:48 crc kubenswrapper[4820]: I0221 07:08:48.029917 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:48 crc kubenswrapper[4820]: I0221 07:08:48.029975 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 07:08:49 crc kubenswrapper[4820]: I0221 07:08:49.034965 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 07:08:49 crc kubenswrapper[4820]: I0221 07:08:49.059997 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 07:08:49 crc kubenswrapper[4820]: I0221 07:08:49.797007 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 07:08:53 crc kubenswrapper[4820]: I0221 07:08:53.862727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.298025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 
07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.299509 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.299574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.305961 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.815437 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 07:08:54 crc kubenswrapper[4820]: I0221 07:08:54.822729 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.023349 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.025161 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.027878 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.847381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.956440 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:08:57 crc kubenswrapper[4820]: I0221 07:08:57.956762 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics" containerID="cri-o://c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" gracePeriod=30
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.455946 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.553589 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") pod \"df55e56a-dbd2-4082-8915-c095d79a0445\" (UID: \"df55e56a-dbd2-4082-8915-c095d79a0445\") "
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.558316 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf" (OuterVolumeSpecName: "kube-api-access-pz5jf") pod "df55e56a-dbd2-4082-8915-c095d79a0445" (UID: "df55e56a-dbd2-4082-8915-c095d79a0445"). InnerVolumeSpecName "kube-api-access-pz5jf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.656522 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz5jf\" (UniqueName: \"kubernetes.io/projected/df55e56a-dbd2-4082-8915-c095d79a0445-kube-api-access-pz5jf\") on node \"crc\" DevicePath \"\""
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.847867 4820 generic.go:334] "Generic (PLEG): container finished" podID="df55e56a-dbd2-4082-8915-c095d79a0445" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804" exitCode=2
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848052 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerDied","Data":"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"}
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df55e56a-dbd2-4082-8915-c095d79a0445","Type":"ContainerDied","Data":"60eb280dafd317b213ced0ce92cb061208211ecad999bed743c8a76df9e0ad8d"}
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.848158 4820 scope.go:117] "RemoveContainer" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.872961 4820 scope.go:117] "RemoveContainer" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"
Feb 21 07:08:58 crc kubenswrapper[4820]: E0221 07:08:58.873461 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804\": container with ID starting with c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804 not found: ID does not exist" containerID="c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.873506 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804"} err="failed to get container status \"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804\": rpc error: code = NotFound desc = could not find container \"c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804\": container with ID starting with c6e856467196d01f45b2c409e9c0c42e54f9f968b27f724b628dc5fdbc8d4804 not found: ID does not exist"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.891028 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.905145 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.915979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:08:58 crc kubenswrapper[4820]: E0221 07:08:58.916546 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.916570 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.916810 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" containerName="kube-state-metrics"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.925931 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.926095 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.927749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 21 07:08:58 crc kubenswrapper[4820]: I0221 07:08:58.928120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063333 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063582 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.063872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.165702 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.172612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.172819 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.172843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.186453 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"kube-state-metrics-0\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.252853 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.612182 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.616672 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent" containerID="cri-o://131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" gracePeriod=30
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.616766 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd" containerID="cri-o://f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" gracePeriod=30
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.616831 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent" containerID="cri-o://986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" gracePeriod=30
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.617024 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core" containerID="cri-o://aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" gracePeriod=30
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.680134 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:08:59 crc kubenswrapper[4820]: W0221 07:08:59.684736 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb570ff_2a5e_4913_a84f_346579eaa104.slice/crio-71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8 WatchSource:0}: Error finding container 71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8: Status 404 returned error can't find the container with id 71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.687010 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.711808 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df55e56a-dbd2-4082-8915-c095d79a0445" path="/var/lib/kubelet/pods/df55e56a-dbd2-4082-8915-c095d79a0445/volumes"
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.862953 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff" exitCode=0
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.862989 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46" exitCode=2
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.862979 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"}
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.863031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"}
Feb 21 07:08:59 crc kubenswrapper[4820]: I0221 07:08:59.866425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerStarted","Data":"71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8"}
Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.888953 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d" exitCode=0
Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.889076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"}
Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.892063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerStarted","Data":"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"}
Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.892294 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 21 07:09:00 crc kubenswrapper[4820]: I0221 07:09:00.909524 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.53959628 podStartE2EDuration="2.909506853s" podCreationTimestamp="2026-02-21 07:08:58 +0000 UTC" firstStartedPulling="2026-02-21 07:08:59.686774779 +0000 UTC m=+1314.719858977" lastFinishedPulling="2026-02-21 07:09:00.056685352 +0000 UTC m=+1315.089769550" observedRunningTime="2026-02-21 07:09:00.907904879 +0000 UTC m=+1315.940989077" watchObservedRunningTime="2026-02-21 07:09:00.909506853 +0000 UTC m=+1315.942591041"
Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.922826 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924487 4820 generic.go:334] "Generic (PLEG): container finished" podID="118c08af-2bde-440a-a9cf-ad089288aae6" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5" exitCode=0
Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924569 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"}
Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"118c08af-2bde-440a-a9cf-ad089288aae6","Type":"ContainerDied","Data":"a3ebef633d35f88845de4a2b21cdc59126d4fd49e7842d92e5d5e3974ce0962e"}
Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.924644 4820 scope.go:117] "RemoveContainer" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"
Feb 21 07:09:02 crc kubenswrapper[4820]: I0221 07:09:02.951653 4820 scope.go:117] "RemoveContainer" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.000858 4820 scope.go:117] "RemoveContainer" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.032606 4820 scope.go:117] "RemoveContainer" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040041 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040094 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040134 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.040390 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") pod \"118c08af-2bde-440a-a9cf-ad089288aae6\" (UID: \"118c08af-2bde-440a-a9cf-ad089288aae6\") "
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.041556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.041735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.046757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k" (OuterVolumeSpecName: "kube-api-access-zzf7k") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "kube-api-access-zzf7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.047816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts" (OuterVolumeSpecName: "scripts") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.054038 4820 scope.go:117] "RemoveContainer" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.054765 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff\": container with ID starting with f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff not found: ID does not exist" containerID="f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.054845 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff"} err="failed to get container status \"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff\": rpc error: code = NotFound desc = could not find container \"f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff\": container with ID starting with f66be6b40e3df015f95d86802b2334e02afe9d2192642e866c83b1ab455875ff not found: ID does not exist"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.054879 4820 scope.go:117] "RemoveContainer" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.056305 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46\": container with ID starting with aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46 not found: ID does not exist" containerID="aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056383 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46"} err="failed to get container status \"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46\": rpc error: code = NotFound desc = could not find container \"aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46\": container with ID starting with aee88b854da2ff7875ee5941c62698f983b60e0bf8bc28f0322e20ecca600c46 not found: ID does not exist"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056436 4820 scope.go:117] "RemoveContainer" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.056793 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5\": container with ID starting with 986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5 not found: ID does not exist" containerID="986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056822 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5"} err="failed to get container status \"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5\": rpc error: code = NotFound desc = could not find container \"986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5\": container with ID starting with 986617b03af2c753ef9e204c57b69befe62c3807a4f4d816cfdc35b5937535d5 not found: ID does not exist"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.056841 4820 scope.go:117] "RemoveContainer" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.057219 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d\": container with ID starting with 131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d not found: ID does not exist" containerID="131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.057268 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d"} err="failed to get container status \"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d\": rpc error: code = NotFound desc = could not find container \"131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d\": container with ID starting with 131ad736be03732556c0546ca430934c9f0b5215f7a7e4a378d24827d158737d not found: ID does not exist"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.071771 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.116645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.138475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data" (OuterVolumeSpecName: "config-data") pod "118c08af-2bde-440a-a9cf-ad089288aae6" (UID: "118c08af-2bde-440a-a9cf-ad089288aae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143218 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143268 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143280 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/118c08af-2bde-440a-a9cf-ad089288aae6-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143289 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143298 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143309 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzf7k\" (UniqueName: \"kubernetes.io/projected/118c08af-2bde-440a-a9cf-ad089288aae6-kube-api-access-zzf7k\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.143318 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/118c08af-2bde-440a-a9cf-ad089288aae6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.933942 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.970480 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.980653 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990275 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990653 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990673 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990679 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990693 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990698 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent"
Feb 21 07:09:03 crc kubenswrapper[4820]: E0221 07:09:03.990712 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990718 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990876 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="proxy-httpd"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990891 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="sg-core"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990903 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-central-agent"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.990913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" containerName="ceilometer-notification-agent"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.992522 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.995068 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.995483 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 21 07:09:03 crc kubenswrapper[4820]: I0221 07:09:03.995655 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.001159 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.061611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0"
Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.061875 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0"
Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0"
Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\"
(UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.062622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164830 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164853 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.164985 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.165021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.165844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.165857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.168916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.169229 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.169876 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.170530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.170979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.182550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"ceilometer-0\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.315132 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.770647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:09:04 crc kubenswrapper[4820]: I0221 07:09:04.944708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"0c38be7124a920b640712dd690755259fce0c90bcf50290cc80460e97c079adc"} Feb 21 07:09:05 crc kubenswrapper[4820]: I0221 07:09:05.712612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118c08af-2bde-440a-a9cf-ad089288aae6" path="/var/lib/kubelet/pods/118c08af-2bde-440a-a9cf-ad089288aae6/volumes" Feb 21 07:09:05 crc kubenswrapper[4820]: I0221 07:09:05.955438 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6"} Feb 21 07:09:06 crc kubenswrapper[4820]: I0221 07:09:06.965473 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208"} Feb 21 07:09:06 crc kubenswrapper[4820]: I0221 07:09:06.966429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3"} Feb 21 07:09:08 crc kubenswrapper[4820]: I0221 07:09:08.989727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerStarted","Data":"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d"} Feb 21 07:09:08 crc kubenswrapper[4820]: I0221 
07:09:08.990371 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 07:09:09 crc kubenswrapper[4820]: I0221 07:09:09.025266 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.868805319 podStartE2EDuration="6.025248873s" podCreationTimestamp="2026-02-21 07:09:03 +0000 UTC" firstStartedPulling="2026-02-21 07:09:04.777179167 +0000 UTC m=+1319.810263365" lastFinishedPulling="2026-02-21 07:09:07.933622711 +0000 UTC m=+1322.966706919" observedRunningTime="2026-02-21 07:09:09.010385975 +0000 UTC m=+1324.043470193" watchObservedRunningTime="2026-02-21 07:09:09.025248873 +0000 UTC m=+1324.058333081" Feb 21 07:09:09 crc kubenswrapper[4820]: I0221 07:09:09.268168 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 21 07:09:34 crc kubenswrapper[4820]: I0221 07:09:34.323435 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.296626 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.298338 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.324622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.347509 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.365463 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.366854 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.372485 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.393378 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.433326 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cd19-account-create-update-ccc55"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.464827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.464957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btrq\" (UniqueName: 
\"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.464987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.465057 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586144 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586208 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.586287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.587123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.587917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.596391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.641877 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod 
\"glance-cd19-account-create-update-77csv\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") " pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.652865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"root-account-create-update-bcvpx\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.690718 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bcvpx" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.731493 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0b59ad-da5f-4279-8aa4-f56bd575a5ce" path="/var/lib/kubelet/pods/0d0b59ad-da5f-4279-8aa4-f56bd575a5ce/volumes" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.732073 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.746574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.748510 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.750569 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.789029 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793937 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793965 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.793987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794044 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: 
\"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.794102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.840841 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.882210 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895497 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: 
\"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895646 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77jl\" (UniqueName: 
\"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.895743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.896988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.901091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.905427 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bzcnx"] Feb 21 07:09:55 crc 
kubenswrapper[4820]: I0221 07:09:55.906182 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.908557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.913172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.918430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.921131 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc 
kubenswrapper[4820]: I0221 07:09:55.922928 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.936044 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"barbican-keystone-listener-7b6747758b-gs56z\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") " pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.936662 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.956802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"barbican-worker-67dd4454fc-lr4lq\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") " pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.960282 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bzcnx"] Feb 21 07:09:55 crc kubenswrapper[4820]: I0221 07:09:55.997286 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq" Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.002299 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.002383 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:56.502369334 +0000 UTC m=+1371.535453532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.003829 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.007862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.018649 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.018846 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient" containerID="cri-o://909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342" gracePeriod=2 Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.025968 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.028321 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.048370 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.051563 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.077076 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.077784 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.077801 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.078116 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d6374d-1595-4586-b161-d199a2b39068" containerName="openstackclient" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.079165 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.106479 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.130870 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.132553 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.159267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.179935 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.180220 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" containerID="cri-o://803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" gracePeriod=30 Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.180411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" containerID="cri-o://0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" gracePeriod=30 Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.201025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsm9\" (UniqueName: 
\"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220216 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.220286 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.220532 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.220637 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:56.720615748 +0000 UTC m=+1371.753699956 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.228994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359119 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7bk\" (UniqueName: 
\"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.359424 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.360606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.363129 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.384262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod 
\"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.403698 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c516-account-create-update-mxhpl"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.460980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.461070 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.461939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.483541 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.521301 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.528975 4820 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vfn4b"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.531409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"barbican-4e9a-account-create-update-4996c\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.557049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"neutron-c516-account-create-update-vrfb9\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") " pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.558852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"placement-c8ba-account-create-update-4wwws\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") " pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.592191 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.595149 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter" containerID="cri-o://087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e" gracePeriod=300 Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.603324 4820 configmap.go:193] Couldn't get configMap 
openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.603416 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:57.60338754 +0000 UTC m=+1372.636471738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.650803 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c8ba-account-create-update-wmp66"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.654695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.730780 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.731106 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.757865 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.761201 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.782947 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5knjn"] Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.810393 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: E0221 07:09:56.810480 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:57.810457019 +0000 UTC m=+1372.843541217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.810820 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4e9a-account-create-update-55xqx"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.896534 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.918481 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter" containerID="cri-o://9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030" gracePeriod=300 Feb 21 07:09:56 crc kubenswrapper[4820]: I0221 07:09:56.943487 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" 
containerName="ovsdbserver-sb" containerID="cri-o://763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4" gracePeriod=300 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.022684 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.077933 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a80b-account-create-update-n9j8x"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.109777 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.169585 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb" containerID="cri-o://e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9" gracePeriod=300 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.185690 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lj8d2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.230753 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.239415 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.242805 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.273623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.297414 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.308508 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wdvf7"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.322107 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.322424 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-p2v97" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter" containerID="cri-o://4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.352140 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.366865 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.391381 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.391668 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" 
containerName="cinder-scheduler" containerID="cri-o://3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.392112 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe" containerID="cri-o://275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.407727 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.408032 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" containerID="cri-o://9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.409660 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" containerID="cri-o://765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.419173 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.431634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.431768 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.453317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6ecb-account-create-update-q98t2"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.479631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.488857 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.518992 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-smnkd"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.533625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.537358 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.537725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.544662 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.560644 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-96c7-account-create-update-fhgrk"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.579343 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.579800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" containerID="cri-o://d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.580280 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" containerID="cri-o://0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.584037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"nova-api-a80b-account-create-update-w6rwf\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") " pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:57 crc kubenswrapper[4820]: 
I0221 07:09:57.612441 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.613380 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" containerID="cri-o://c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.614606 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" containerID="cri-o://c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a" gracePeriod=30 Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.640559 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.647518 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.647598 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:58.147567467 +0000 UTC m=+1373.180651665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.648330 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:59.648310178 +0000 UTC m=+1374.681394376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.664708 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.833025 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.833111 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:59.833088229 +0000 UTC m=+1374.866172427 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.833643 4820 generic.go:334] "Generic (PLEG): container finished" podID="899bd84b-c67f-4a89-9f92-a68094530566" containerID="9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067" exitCode=143 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.861472 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_455bfe0a-a135-4900-8b15-ce584dc8a5bb/ovsdbserver-sb/0.log" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.861765 4820 generic.go:334] "Generic (PLEG): container finished" podID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerID="087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.861781 4820 generic.go:334] "Generic (PLEG): container finished" podID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerID="763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4" exitCode=143 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.867992 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085b95c8-2602-461b-8a08-91aff75f97a0" path="/var/lib/kubelet/pods/085b95c8-2602-461b-8a08-91aff75f97a0/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.869755 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb" path="/var/lib/kubelet/pods/6df65e7d-3ade-4585-9f5f-7a4b7c8bc8eb/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.870738 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8a463c-63a8-424f-a3ab-4e46390b8cca" 
path="/var/lib/kubelet/pods/8e8a463c-63a8-424f-a3ab-4e46390b8cca/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.871316 4820 generic.go:334] "Generic (PLEG): container finished" podID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.871492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b290d702-774e-48b8-a243-5a9c648740a7" path="/var/lib/kubelet/pods/b290d702-774e-48b8-a243-5a9c648740a7/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.873547 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b400c916-2ba9-4d7e-b9f5-6044605f279c" path="/var/lib/kubelet/pods/b400c916-2ba9-4d7e-b9f5-6044605f279c/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.874705 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8063e5a-6b15-4855-9ae2-5fdcc912b472" path="/var/lib/kubelet/pods/b8063e5a-6b15-4855-9ae2-5fdcc912b472/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.875661 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe51cee-e461-4a5f-86d9-0eb600da3a82" path="/var/lib/kubelet/pods/bbe51cee-e461-4a5f-86d9-0eb600da3a82/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.881765 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p2v97_96d07086-c2e8-4351-bac8-b99c485826c4/openstack-network-exporter/0.log" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.881829 4820 generic.go:334] "Generic (PLEG): container finished" podID="96d07086-c2e8-4351-bac8-b99c485826c4" containerID="4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.882100 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5" 
path="/var/lib/kubelet/pods/d56fa8ad-e902-4bdb-855a-6ca18bb9d1a5/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.883924 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc" path="/var/lib/kubelet/pods/e26bf6ef-8ccd-4035-a2e7-d5cd693d30dc/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.886312 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27134bb-c9b2-42d4-bad5-81e7b05874e7" path="/var/lib/kubelet/pods/e27134bb-c9b2-42d4-bad5-81e7b05874e7/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.899061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/ovsdbserver-nb/0.log" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.899112 4820 generic.go:334] "Generic (PLEG): container finished" podID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerID="9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030" exitCode=2 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.899127 4820 generic.go:334] "Generic (PLEG): container finished" podID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerID="e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9" exitCode=143 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.904193 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b995bf-93f1-4f28-a1a6-0d13ac9ca744" path="/var/lib/kubelet/pods/e2b995bf-93f1-4f28-a1a6-0d13ac9ca744/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.909492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b51414-aa8f-49ad-b662-b3c44eb0bc62" path="/var/lib/kubelet/pods/f9b51414-aa8f-49ad-b662-b3c44eb0bc62/volumes" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerDied","Data":"9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917167 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-rf689"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917203 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerDied","Data":"087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917544 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2x7vh"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.917561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerDied","Data":"763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerDied","Data":"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerDied","Data":"4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerDied","Data":"9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918417 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerDied","Data":"e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9"} Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.918793 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.919035 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns" containerID="cri-o://6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" gracePeriod=10 Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.934401 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.942481 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zwzx4"] Feb 21 07:09:57 crc kubenswrapper[4820]: E0221 07:09:57.955563 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455bfe0a_a135_4900_8b15_ce584dc8a5bb.slice/crio-763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf0c3ff8_e36f_4539_a7da_9d2b1e7a146d.slice/crio-conmon-e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef3827c2_ee55_4f86_a752_d7cbc9c6454e.slice/crio-conmon-d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96d07086_c2e8_4351_bac8_b99c485826c4.slice/crio-conmon-4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce.scope\": RecentStats: unable to find data in memory cache]" Feb 21 07:09:57 crc kubenswrapper[4820]: I0221 07:09:57.983831 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.031309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.031885 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" containerID="cri-o://ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032267 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" containerID="cri-o://4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032325 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" containerID="cri-o://697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032369 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" containerID="cri-o://adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032411 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" containerID="cri-o://b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032449 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" containerID="cri-o://15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032479 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" containerID="cri-o://143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032513 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" containerID="cri-o://1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032545 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" containerID="cri-o://5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" gracePeriod=30 Feb 21 07:09:58 crc 
kubenswrapper[4820]: I0221 07:09:58.032579 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-auditor" containerID="cri-o://c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032620 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" containerID="cri-o://956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032654 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" containerID="cri-o://472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032690 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" containerID="cri-o://8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032688 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-auditor" containerID="cri-o://6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.032807 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" 
containerID="cri-o://3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.063264 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k9s8t"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.075185 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.075432 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7796b97765-sqvtc" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" containerID="cri-o://cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.075544 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7796b97765-sqvtc" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" containerID="cri-o://89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.105476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.128545 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.129055 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85cb846b98-bwgbn" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-log" containerID="cri-o://eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.129463 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-85cb846b98-bwgbn" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" containerID="cri-o://2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.155138 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.171070 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.184971 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jng5b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.235570 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6976-account-create-update-mzpt2"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.259291 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.259369 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:09:59.259349617 +0000 UTC m=+1374.292433815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.286029 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.311417 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" containerID="cri-o://d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.332431 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lnssq"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.378058 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.393394 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.401130 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.409206 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j8m4b"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.413176 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_455bfe0a-a135-4900-8b15-ce584dc8a5bb/ovsdbserver-sb/0.log" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.413284 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.416433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.425972 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.426227 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" containerID="cri-o://4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.426374 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" containerID="cri-o://21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.445569 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.458403 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.467388 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w9fxb"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.480007 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.486395 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.497875 4820 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.498126 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" containerID="cri-o://23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.498283 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" containerID="cri-o://841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.506942 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p2v97_96d07086-c2e8-4351-bac8-b99c485826c4/openstack-network-exporter/0.log" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.507359 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.521835 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.530821 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.531071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-867cbf55-jx754" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" containerID="cri-o://cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.531259 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-867cbf55-jx754" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker" containerID="cri-o://53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.542403 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.542711 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" containerID="cri-o://3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.542829 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" 
containerName="barbican-keystone-listener" containerID="cri-o://df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579017 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579150 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579277 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579466 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579521 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.579668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") pod \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\" (UID: \"455bfe0a-a135-4900-8b15-ce584dc8a5bb\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.582045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config" (OuterVolumeSpecName: "config") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.583841 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.587311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts" (OuterVolumeSpecName: "scripts") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.589257 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.607947 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb" (OuterVolumeSpecName: "kube-api-access-fs4qb") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "kube-api-access-fs4qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.616523 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/ovsdbserver-nb/0.log" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.616716 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.618230 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.634724 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" containerID="cri-o://7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" gracePeriod=604800 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.634845 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.634911 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.637610 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.637689 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.644039 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod 
"455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.655294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.657161 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" containerID="cri-o://f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.668824 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-b68n2"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684678 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684831 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.684906 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685093 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") pod \"96d07086-c2e8-4351-bac8-b99c485826c4\" (UID: \"96d07086-c2e8-4351-bac8-b99c485826c4\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685838 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config" (OuterVolumeSpecName: "config") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.685942 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688181 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs4qb\" (UniqueName: \"kubernetes.io/projected/455bfe0a-a135-4900-8b15-ce584dc8a5bb-kube-api-access-fs4qb\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688514 4820 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688524 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d07086-c2e8-4351-bac8-b99c485826c4-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688533 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688543 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688551 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/96d07086-c2e8-4351-bac8-b99c485826c4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688572 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.688581 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/455bfe0a-a135-4900-8b15-ce584dc8a5bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.694678 4820 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 21 07:09:58 crc kubenswrapper[4820]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 21 07:09:58 crc kubenswrapper[4820]: + source /usr/local/bin/container-scripts/functions Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNBridge=br-int Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNRemote=tcp:localhost:6642 Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNEncapType=geneve Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNAvailabilityZones= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ EnableChassisAsGateway=true Feb 21 07:09:58 crc kubenswrapper[4820]: ++ PhysicalNetworks= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNHostName= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 21 07:09:58 crc kubenswrapper[4820]: ++ ovs_dir=/var/lib/openvswitch Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 21 07:09:58 crc kubenswrapper[4820]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc 
kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + cleanup_ovsdb_server_semaphore Feb 21 07:09:58 crc kubenswrapper[4820]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 21 07:09:58 crc kubenswrapper[4820]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-rwsk7" message=< Feb 21 07:09:58 crc kubenswrapper[4820]: Exiting ovsdb-server (5) [ OK ] Feb 21 07:09:58 crc kubenswrapper[4820]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 21 07:09:58 crc kubenswrapper[4820]: + source /usr/local/bin/container-scripts/functions Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNBridge=br-int Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNRemote=tcp:localhost:6642 Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNEncapType=geneve Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNAvailabilityZones= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ EnableChassisAsGateway=true Feb 21 07:09:58 crc kubenswrapper[4820]: ++ PhysicalNetworks= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNHostName= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 21 07:09:58 crc kubenswrapper[4820]: ++ ovs_dir=/var/lib/openvswitch Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 21 07:09:58 crc kubenswrapper[4820]: ++ 
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + cleanup_ovsdb_server_semaphore Feb 21 07:09:58 crc kubenswrapper[4820]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 21 07:09:58 crc kubenswrapper[4820]: > Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.694720 4820 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 21 07:09:58 crc kubenswrapper[4820]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 21 07:09:58 crc kubenswrapper[4820]: + source /usr/local/bin/container-scripts/functions Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNBridge=br-int Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNRemote=tcp:localhost:6642 Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNEncapType=geneve Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNAvailabilityZones= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ EnableChassisAsGateway=true Feb 21 07:09:58 crc kubenswrapper[4820]: ++ PhysicalNetworks= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ OVNHostName= Feb 21 07:09:58 crc kubenswrapper[4820]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 21 07:09:58 crc kubenswrapper[4820]: ++ ovs_dir=/var/lib/openvswitch Feb 21 07:09:58 crc kubenswrapper[4820]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 21 07:09:58 crc kubenswrapper[4820]: ++ 
FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 21 07:09:58 crc kubenswrapper[4820]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + sleep 0.5 Feb 21 07:09:58 crc kubenswrapper[4820]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 21 07:09:58 crc kubenswrapper[4820]: + cleanup_ovsdb_server_semaphore Feb 21 07:09:58 crc kubenswrapper[4820]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 21 07:09:58 crc kubenswrapper[4820]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 21 07:09:58 crc kubenswrapper[4820]: > pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" containerID="cri-o://355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.694751 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" containerID="cri-o://355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" gracePeriod=29 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.703830 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.704253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps" (OuterVolumeSpecName: "kube-api-access-m7xps") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: 
"96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "kube-api-access-m7xps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.713535 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.720427 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vdzvw"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.727948 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.728370 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" containerID="cri-o://d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.728935 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" containerID="cri-o://84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.733764 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.742378 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.742616 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.775481 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pjnhh"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794284 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794533 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") 
" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.794803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") pod \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\" (UID: \"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d\") " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.795360 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7xps\" (UniqueName: \"kubernetes.io/projected/96d07086-c2e8-4351-bac8-b99c485826c4-kube-api-access-m7xps\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.796723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.799387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts" (OuterVolumeSpecName: "scripts") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.800114 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config" (OuterVolumeSpecName: "config") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.805433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.814711 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.822303 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6rxdc"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.828790 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.828992 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" containerID="cri-o://4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.832035 4820 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.859497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj" (OuterVolumeSpecName: "kube-api-access-lwpjj") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "kube-api-access-lwpjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.860429 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.863442 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bd4bz"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.882665 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897909 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897946 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwpjj\" (UniqueName: \"kubernetes.io/projected/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-kube-api-access-lwpjj\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897957 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897968 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.897993 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.909081 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.909341 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.909625 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="8c841249-7293-4826-b05f-e4a189aaef07" 
containerName="nova-cell0-conductor-conductor" containerID="cri-o://498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" gracePeriod=30 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.916977 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerID="d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96" exitCode=143 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.917036 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerDied","Data":"d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96"} Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.928069 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.963183 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" exitCode=0 Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.963339 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.964621 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:58 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: if [ -n "glance" ]; then Feb 21 07:09:58 crc kubenswrapper[4820]: GRANT_DATABASE="glance" Feb 21 07:09:58 crc kubenswrapper[4820]: else Feb 21 07:09:58 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:58 crc kubenswrapper[4820]: fi Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:58 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:58 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:58 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:58 crc kubenswrapper[4820]: # support updates Feb 21 07:09:58 crc kubenswrapper[4820]: Feb 21 07:09:58 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.964786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerDied","Data":"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"} Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.964827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58f6456c9f-zb9jc" event={"ID":"dc228462-9ac8-475c-859b-bbce5678a5ea","Type":"ContainerDied","Data":"d947f700f97ac52cf725cd42cbfb548fb57f713a94e9ecafa0ce141427736451"} Feb 21 07:09:58 crc kubenswrapper[4820]: I0221 07:09:58.964844 4820 scope.go:117] "RemoveContainer" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847" Feb 21 07:09:58 crc kubenswrapper[4820]: E0221 07:09:58.966572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-cd19-account-create-update-77csv" podUID="95200e0a-ca93-4303-80af-8b950ddc8746" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.004405 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"] Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007542 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") " Feb 21 07:09:59 crc kubenswrapper[4820]: 
I0221 07:09:59.007617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007861 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.007892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") pod \"dc228462-9ac8-475c-859b-bbce5678a5ea\" (UID: \"dc228462-9ac8-475c-859b-bbce5678a5ea\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.008670 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.012719 4820 generic.go:334] "Generic (PLEG): container finished" podID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerID="c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.012799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerDied","Data":"c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.027158 4820 generic.go:334] "Generic (PLEG): container finished" podID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.027250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerDied","Data":"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"}
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.041835 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.043521 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.045337 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerID="eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.045430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerDied","Data":"eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443"}
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.045346 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.048768 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058729 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_455bfe0a-a135-4900-8b15-ce584dc8a5bb/ovsdbserver-sb/0.log"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058884 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera" containerID="cri-o://437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25" gracePeriod=30
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058897 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.058909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"455bfe0a-a135-4900-8b15-ce584dc8a5bb","Type":"ContainerDied","Data":"a5054f534bcacef82cd1fa270668d60a62e37baeb241caf361f2e16ba9351a1e"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.063523 4820 generic.go:334] "Generic (PLEG): container finished" podID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerID="89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.063641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerDied","Data":"89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.065886 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p2v97_96d07086-c2e8-4351-bac8-b99c485826c4/openstack-network-exporter/0.log"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.066082 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p2v97"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.066492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p2v97" event={"ID":"96d07086-c2e8-4351-bac8-b99c485826c4","Type":"ContainerDied","Data":"7d34608592e5bad3ce2cdbb838b7f2d91070fccc15c351f0f966dcae95c21a16"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.068196 4820 generic.go:334] "Generic (PLEG): container finished" podID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerID="4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.068346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerDied","Data":"4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.072631 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7" (OuterVolumeSpecName: "kube-api-access-c7dh7") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "kube-api-access-c7dh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076315 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076484 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076620 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076731 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076825 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076897 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076970 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077039 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077096 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077176 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077254 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077330 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.076464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.077935 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.079291 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.079191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080328 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080669 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080839 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.080922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.081148 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-cffb45b79-w6bp8" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd" containerID="cri-o://a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f" gracePeriod=30
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.081554 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-cffb45b79-w6bp8" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server" containerID="cri-o://974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b" gracePeriod=30
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.087188 4820 generic.go:334] "Generic (PLEG): container finished" podID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerID="3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.087304 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerDied","Data":"3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.096622 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"]
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.098662 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerID="23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.098821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerDied","Data":"23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.106841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerStarted","Data":"e23762ffd7ce106b9f82fdb1d0d30eef475c43de4d355359cab19dd81674c400"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.110270 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.111346 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7dh7\" (UniqueName: \"kubernetes.io/projected/dc228462-9ac8-475c-859b-bbce5678a5ea-kube-api-access-c7dh7\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.123604 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" containerID="cri-o://0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" gracePeriod=604800
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.147770 4820 generic.go:334] "Generic (PLEG): container finished" podID="d7d6374d-1595-4586-b161-d199a2b39068" containerID="909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342" exitCode=137
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.184192 4820 generic.go:334] "Generic (PLEG): container finished" podID="4709782f-54e7-4a78-a56e-8f58a5556501" containerID="d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.184273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerDied","Data":"d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.189067 4820 generic.go:334] "Generic (PLEG): container finished" podID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" exitCode=0
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.189129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.192602 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/ovsdbserver-nb/0.log"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.192724 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.192775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"df0c3ff8-e36f-4539-a7da-9d2b1e7a146d","Type":"ContainerDied","Data":"b52687043d29455f8c5ffa92bb3e6d7984a2979aaab8cd8cfdef30f5b4f361f2"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.194823 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42ba382-9e03-4f39-904e-87f4d764175c" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" exitCode=143
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.194866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerDied","Data":"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d"}
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.242456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.257868 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.276063 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.280988 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320806 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320845 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320857 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.320869 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.320919 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.320999 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:01.3209811 +0000 UTC m=+1376.354065298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.323069 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.344077 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.350680 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.357047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.374278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config" (OuterVolumeSpecName: "config") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.383285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc228462-9ac8-475c-859b-bbce5678a5ea" (UID: "dc228462-9ac8-475c-859b-bbce5678a5ea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.384755 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.385492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" (UID: "df0c3ff8-e36f-4539-a7da-9d2b1e7a146d"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.388519 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "96d07086-c2e8-4351-bac8-b99c485826c4" (UID: "96d07086-c2e8-4351-bac8-b99c485826c4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.408933 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.411394 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.412866 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.412930 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422866 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422904 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422918 4820 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422929 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422939 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d07086-c2e8-4351-bac8-b99c485826c4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422949 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422961 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.422973 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc228462-9ac8-475c-859b-bbce5678a5ea-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.431799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "455bfe0a-a135-4900-8b15-ce584dc8a5bb" (UID: "455bfe0a-a135-4900-8b15-ce584dc8a5bb"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.516735 4820 scope.go:117] "RemoveContainer" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.525481 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/455bfe0a-a135-4900-8b15-ce584dc8a5bb-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.549606 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.603587 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.613223 4820 scope.go:117] "RemoveContainer" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.614231 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847\": container with ID starting with 6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847 not found: ID does not exist" containerID="6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614277 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847"} err="failed to get container status \"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847\": rpc error: code = NotFound desc = could not find container \"6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847\": container with ID starting with 6bfe0db0eb8e2c88ffac0bfbba4e53fc275e4d14babf893e2c734b412e0e3847 not found: ID does not exist"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614296 4820 scope.go:117] "RemoveContainer" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.614729 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61\": container with ID starting with c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61 not found: ID does not exist" containerID="c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614753 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61"} err="failed to get container status \"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61\": rpc error: code = NotFound desc = could not find container \"c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61\": container with ID starting with c4a8ea33ec09edbb085557067ca874d8de90b69a7618cd3a43c7705fa05d1b61 not found: ID does not exist"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.614766 4820 scope.go:117] "RemoveContainer" containerID="087725d49d3eda013af8b6833f156a663fa05bd1ae58e6cd6c97f96a9a387f5e"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.626283 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.626406 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.629619 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.629660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") pod \"d7d6374d-1595-4586-b161-d199a2b39068\" (UID: \"d7d6374d-1595-4586-b161-d199a2b39068\") "
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.654737 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.671914 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.672715 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc" (OuterVolumeSpecName: "kube-api-access-7x8xc") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "kube-api-access-7x8xc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.679322 4820 scope.go:117] "RemoveContainer" containerID="763a6b46ea0010465aaf5a12dc0a5759f78313371c19cbeb4189a6c04b0f99d4"
Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.765531 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x8xc\" (UniqueName: \"kubernetes.io/projected/d7d6374d-1595-4586-b161-d199a2b39068-kube-api-access-7x8xc\") on node \"crc\" DevicePath \"\""
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.765680 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.765748 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.765721243 +0000 UTC m=+1378.798805451 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.796913 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:59 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: if [ -n "placement" ]; then Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="placement" Feb 21 07:09:59 crc kubenswrapper[4820]: else Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:59 crc kubenswrapper[4820]: fi Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:59 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:59 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:59 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:59 crc kubenswrapper[4820]: # support updates Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.797514 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:59 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: if [ -n "neutron" ]; then Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="neutron" Feb 21 07:09:59 crc kubenswrapper[4820]: else Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:59 crc kubenswrapper[4820]: fi Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:59 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:59 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:59 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:59 crc kubenswrapper[4820]: # support updates Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.806691 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-c8ba-account-create-update-4wwws" podUID="0fa0449e-f842-4605-b814-1e7ede08a5b7" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.806798 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-c516-account-create-update-vrfb9" podUID="67b282c5-1012-4188-bc31-b8e7e794bb77" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.835834 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.836091 4820 scope.go:117] "RemoveContainer" containerID="4674ea514756bc9a67ce3b0d32627dbc628c1c0dbddfbaace5ee5ef4c003c5ce" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.836532 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:09:59 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: if [ -n "barbican" ]; then Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="barbican" Feb 21 07:09:59 crc kubenswrapper[4820]: else Feb 21 07:09:59 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:09:59 crc kubenswrapper[4820]: fi Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:09:59 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:09:59 crc kubenswrapper[4820]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:09:59 crc kubenswrapper[4820]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:09:59 crc kubenswrapper[4820]: # support updates Feb 21 07:09:59 crc kubenswrapper[4820]: Feb 21 07:09:59 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.838111 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-4e9a-account-create-update-4996c" podUID="6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.868765 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: E0221 07:09:59.868810 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.868796385 +0000 UTC m=+1378.901880583 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.869777 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06da7378-1c64-43e9-8d97-63a92fe503fc" path="/var/lib/kubelet/pods/06da7378-1c64-43e9-8d97-63a92fe503fc/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.871023 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d96043-ca9d-4dd0-aa3e-8bcd5941a97b" path="/var/lib/kubelet/pods/10d96043-ca9d-4dd0-aa3e-8bcd5941a97b/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.871696 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3f478b-4142-46b8-a9ca-603e9e1860ac" path="/var/lib/kubelet/pods/1b3f478b-4142-46b8-a9ca-603e9e1860ac/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.872438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa19e90-7854-4eb9-9b72-26c8d0739851" path="/var/lib/kubelet/pods/1fa19e90-7854-4eb9-9b72-26c8d0739851/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.886083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.888517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.900150 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324a15c6-a903-420b-8db4-4268008c83d1" path="/var/lib/kubelet/pods/324a15c6-a903-420b-8db4-4268008c83d1/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.901131 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1b4a37-bb80-4c59-acdc-b6490c6e6c44" path="/var/lib/kubelet/pods/3b1b4a37-bb80-4c59-acdc-b6490c6e6c44/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.902175 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f798ecc-7cdf-4b7b-b8c9-0754d3391676" path="/var/lib/kubelet/pods/3f798ecc-7cdf-4b7b-b8c9-0754d3391676/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.903640 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9dd869-f673-4077-b345-05b4e79eb590" path="/var/lib/kubelet/pods/4b9dd869-f673-4077-b345-05b4e79eb590/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.905738 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d96a68b-1b90-4fcd-9716-679be14d3157" path="/var/lib/kubelet/pods/4d96a68b-1b90-4fcd-9716-679be14d3157/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.918901 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7e07b2-8561-41da-9c7f-ea5d80280d0a" 
path="/var/lib/kubelet/pods/9f7e07b2-8561-41da-9c7f-ea5d80280d0a/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.931088 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf044875-b3ef-48f5-b802-1bd167de5685" path="/var/lib/kubelet/pods/cf044875-b3ef-48f5-b802-1bd167de5685/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.932084 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69a9369-affe-4441-bf33-3c0f13540875" path="/var/lib/kubelet/pods/d69a9369-affe-4441-bf33-3c0f13540875/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.935849 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" path="/var/lib/kubelet/pods/df0c3ff8-e36f-4539-a7da-9d2b1e7a146d/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.936627 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1974d89-b3a1-4cc5-b113-fb39248e5bf0" path="/var/lib/kubelet/pods/e1974d89-b3a1-4cc5-b113-fb39248e5bf0/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.943030 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e610e477-7d95-4af5-be48-f8a9acd81d6a" path="/var/lib/kubelet/pods/e610e477-7d95-4af5-be48-f8a9acd81d6a/volumes" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.961954 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d7d6374d-1595-4586-b161-d199a2b39068" (UID: "d7d6374d-1595-4586-b161-d199a2b39068"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.969298 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.969621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.969827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.970040 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.970164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.970290 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") pod \"e533e163-2ccc-4468-9083-c9bf711b0dfb\" (UID: \"e533e163-2ccc-4468-9083-c9bf711b0dfb\") " Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.973838 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.974115 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.974126 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7d6374d-1595-4586-b161-d199a2b39068-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.977356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.979419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.982159 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts" (OuterVolumeSpecName: "scripts") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:09:59 crc kubenswrapper[4820]: I0221 07:09:59.982433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn" (OuterVolumeSpecName: "kube-api-access-vkbbn") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "kube-api-access-vkbbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.009983 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.024968 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.035948 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.036022 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.072540 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 07:10:00 crc kubenswrapper[4820]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: if [ -n "nova_api" ]; then Feb 21 07:10:00 crc kubenswrapper[4820]: GRANT_DATABASE="nova_api" Feb 21 07:10:00 crc kubenswrapper[4820]: else Feb 21 07:10:00 crc kubenswrapper[4820]: GRANT_DATABASE="*" Feb 21 07:10:00 crc kubenswrapper[4820]: fi Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: # going for maximum compatibility here: Feb 21 07:10:00 crc kubenswrapper[4820]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 21 07:10:00 crc kubenswrapper[4820]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 21 07:10:00 crc kubenswrapper[4820]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 21 07:10:00 crc kubenswrapper[4820]: # support updates Feb 21 07:10:00 crc kubenswrapper[4820]: Feb 21 07:10:00 crc kubenswrapper[4820]: $MYSQL_CMD < logger="UnhandledError" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.075403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: E0221 07:10:00.075923 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-a80b-account-create-update-w6rwf" podUID="ed145514-af37-491d-bc62-2f84273b4fd0" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079602 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079632 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkbbn\" (UniqueName: \"kubernetes.io/projected/e533e163-2ccc-4468-9083-c9bf711b0dfb-kube-api-access-vkbbn\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079644 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079652 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.079661 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e533e163-2ccc-4468-9083-c9bf711b0dfb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.112667 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data" (OuterVolumeSpecName: "config-data") pod "e533e163-2ccc-4468-9083-c9bf711b0dfb" (UID: "e533e163-2ccc-4468-9083-c9bf711b0dfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.181918 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e533e163-2ccc-4468-9083-c9bf711b0dfb-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.212283 4820 generic.go:334] "Generic (PLEG): container finished" podID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerID="437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.309018 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.309190 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" exitCode=0 Feb 21 07:10:00 crc 
kubenswrapper[4820]: I0221 07:10:00.310658 4820 generic.go:334] "Generic (PLEG): container finished" podID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerID="134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5" exitCode=1 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.311433 4820 scope.go:117] "RemoveContainer" containerID="134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.336162 4820 generic.go:334] "Generic (PLEG): container finished" podID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerID="974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.336195 4820 generic.go:334] "Generic (PLEG): container finished" podID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerID="a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f" exitCode=0 Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.341702 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.359764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerDied","Data":"437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360096 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360119 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360134 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58f6456c9f-zb9jc"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-vrfb9" event={"ID":"67b282c5-1012-4188-bc31-b8e7e794bb77","Type":"ContainerStarted","Data":"0bf5947fd1441fc936e5ed5dfa7b04468b4ee6948a25b45d63c164f9452941fa"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360162 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-77csv" event={"ID":"95200e0a-ca93-4303-80af-8b950ddc8746","Type":"ContainerStarted","Data":"57cf883f5a62845b5703775e5d378694a44bfaa0c7228605211777d063adb94a"} Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360197 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360210 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360219 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360228 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerDied","Data":"134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360269 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerStarted","Data":"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerStarted","Data":"d5ed25326b5133c99c08fd6d1fe6d320a4913920be2b2b8d47571a1f05ab484f"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360297 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360308 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-p2v97"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerDied","Data":"974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.360343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerDied","Data":"a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.377706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-4wwws" event={"ID":"0fa0449e-f842-4605-b814-1e7ede08a5b7","Type":"ContainerStarted","Data":"7863f6fcecb57bf0d8f98b9a21144496e336d41b9b7a80cb88f8e4fa54e39a4d"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393448 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerID="24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd" exitCode=0
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerDied","Data":"24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b1db760-d9fc-477f-bc0b-8119d247253b","Type":"ContainerDied","Data":"6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393563 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf41331dc2f0220a2dc121fea5728deaea6c17ccff16b3eb94fe490cf7810ff"
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.393737 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406146 4820 generic.go:334] "Generic (PLEG): container finished" podID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" exitCode=0
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406249 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerDied","Data":"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406277 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e533e163-2ccc-4468-9083-c9bf711b0dfb","Type":"ContainerDied","Data":"26cd1076cc63a3c9ca70f42c100523437bd60b14673a32f0d582762b2e741f8a"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.406342 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429425 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429769 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" containerID="cri-o://28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" gracePeriod=30
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429939 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" containerID="cri-o://eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" gracePeriod=30
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.429988 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="sg-core" containerID="cri-o://c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" gracePeriod=30
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.430028 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" containerID="cri-o://e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" gracePeriod=30
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.436833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-w6rwf" event={"ID":"ed145514-af37-491d-bc62-2f84273b4fd0","Type":"ContainerStarted","Data":"234e2a51c95b60e8bddead8141fd036173f79d8091f7c813c28f1e6875ceb592"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.484332 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.484643 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" containerID="cri-o://4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" gracePeriod=30
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") "
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") "
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488393 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") "
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") "
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.488534 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") pod \"7b1db760-d9fc-477f-bc0b-8119d247253b\" (UID: \"7b1db760-d9fc-477f-bc0b-8119d247253b\") "
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.507210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-4996c" event={"ID":"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05","Type":"ContainerStarted","Data":"a87feecffdb740c28defaad0723df39d6ec5a1e99d858b490aafa6edf23d56e8"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.526058 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5" (OuterVolumeSpecName: "kube-api-access-9d9n5") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "kube-api-access-9d9n5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.529326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerStarted","Data":"86fa03fcf82765a136a3aab82794955988ac327e55c1a34182d75c4632f7c8fc"}
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.550414 4820 scope.go:117] "RemoveContainer" containerID="9b2390a7c05e56db19bda74dfb3d9d4dd876051e208b624fc3be25ba34452030"
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.582342 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.585537 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.590708 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d9n5\" (UniqueName: \"kubernetes.io/projected/7b1db760-d9fc-477f-bc0b-8119d247253b-kube-api-access-9d9n5\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.608293 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": read tcp 10.217.0.2:40096->10.217.0.170:8776: read: connection reset by peer"
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.665364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.701517 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.901640 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data" (OuterVolumeSpecName: "config-data") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.908771 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.919262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:00 crc kubenswrapper[4820]: I0221 07:10:00.932604 4820 scope.go:117] "RemoveContainer" containerID="e5bf8c6230a3cf28cb4d6810d400ab586125f96f1e1d8e1e052c5ad5a57074e9"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.011092 4820 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.037626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "7b1db760-d9fc-477f-bc0b-8119d247253b" (UID: "7b1db760-d9fc-477f-bc0b-8119d247253b"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.113935 4820 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1db760-d9fc-477f-bc0b-8119d247253b-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.394928 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.401497 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"]
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.420991 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.421047 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:05.421030822 +0000 UTC m=+1380.454115020 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.437889 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b298-account-create-update-wh2wv"]
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.443983 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.446889 4820 scope.go:117] "RemoveContainer" containerID="909cf351ee5d3a426633b14e5a872b68e1e1f2b2e35b195ce445cb68523c8342"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.471065 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.471590 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" containerID="cri-o://a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff" gracePeriod=30
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.479401 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.483426 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489322 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"]
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489802 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="ovsdbserver-sb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489816 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="ovsdbserver-sb"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489832 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="init"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489838 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="init"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489852 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489858 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489870 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="cinder-scheduler"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489878 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="cinder-scheduler"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489887 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489893 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489906 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489911 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489922 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="mysql-bootstrap"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489927 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="mysql-bootstrap"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489936 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489943 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489954 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489959 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489971 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489977 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.489987 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.489993 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.490001 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490006 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.490018 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490023 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe"
Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.490033 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490039 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490195 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490209 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490220 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" containerName="dnsmasq-dns"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490231 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="cinder-scheduler"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490259 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" containerName="nova-cell1-novncproxy-novncproxy"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490269 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-server"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490280 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="ovsdbserver-nb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490290 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" containerName="probe"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490302 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" containerName="galera"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490311 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" containerName="proxy-httpd"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490322 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" containerName="ovsdbserver-sb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490351 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0c3ff8-e36f-4539-a7da-9d2b1e7a146d" containerName="openstack-network-exporter"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.490969 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.494825 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.507141 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.521912 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522097 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522135 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.522572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") pod \"b81af4bd-d2af-4a26-8f4d-a3e612778607\" (UID: \"b81af4bd-d2af-4a26-8f4d-a3e612778607\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.523253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.534729 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"]
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.535505 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.536208 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.536257 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-generated\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.536262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.539281 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.545637 4820 scope.go:117] "RemoveContainer" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.559765 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-68q2w"]
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.562925 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2" (OuterVolumeSpecName: "kube-api-access-cnbv2") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "kube-api-access-cnbv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.572298 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-68q2w"]
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.617031 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.629797 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.641405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") pod \"67b282c5-1012-4188-bc31-b8e7e794bb77\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") pod \"ed145514-af37-491d-bc62-2f84273b4fd0\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642682 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") pod \"ed145514-af37-491d-bc62-2f84273b4fd0\" (UID: \"ed145514-af37-491d-bc62-2f84273b4fd0\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.642733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") pod \"0fa0449e-f842-4605-b814-1e7ede08a5b7\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") pod \"95200e0a-ca93-4303-80af-8b950ddc8746\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643724 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") pod \"95200e0a-ca93-4303-80af-8b950ddc8746\" (UID: \"95200e0a-ca93-4303-80af-8b950ddc8746\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643808 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") pod \"0fa0449e-f842-4605-b814-1e7ede08a5b7\" (UID: \"0fa0449e-f842-4605-b814-1e7ede08a5b7\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") pod \"67b282c5-1012-4188-bc31-b8e7e794bb77\" (UID: \"67b282c5-1012-4188-bc31-b8e7e794bb77\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.643945 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") pod \"9235cff6-e0e8-471a-9377-26dfcfd84dac\" (UID: \"9235cff6-e0e8-471a-9377-26dfcfd84dac\") "
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.644526 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.644645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.644784 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645156 4820 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645203 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnbv2\" (UniqueName: \"kubernetes.io/projected/b81af4bd-d2af-4a26-8f4d-a3e612778607-kube-api-access-cnbv2\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645218 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645332 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67b282c5-1012-4188-bc31-b8e7e794bb77" (UID: "67b282c5-1012-4188-bc31-b8e7e794bb77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.645923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed145514-af37-491d-bc62-2f84273b4fd0" (UID: "ed145514-af37-491d-bc62-2f84273b4fd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.647232 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.647273 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b81af4bd-d2af-4a26-8f4d-a3e612778607-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.654551 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.657421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95200e0a-ca93-4303-80af-8b950ddc8746" (UID: "95200e0a-ca93-4303-80af-8b950ddc8746"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.664088 4820 scope.go:117] "RemoveContainer" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.664779 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fa0449e-f842-4605-b814-1e7ede08a5b7" (UID: "0fa0449e-f842-4605-b814-1e7ede08a5b7"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.665052 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.665103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerStarted","Data":"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.668483 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.669629 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.669888 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw" (OuterVolumeSpecName: "kube-api-access-gmtfw") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "kube-api-access-gmtfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.670827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a80b-account-create-update-w6rwf" event={"ID":"ed145514-af37-491d-bc62-2f84273b4fd0","Type":"ContainerDied","Data":"234e2a51c95b60e8bddead8141fd036173f79d8091f7c813c28f1e6875ceb592"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.670931 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a80b-account-create-update-w6rwf" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.676216 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s76l5"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688603 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688628 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" exitCode=2 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688637 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.688728 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.690213 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-c8ba-account-create-update-4wwws" event={"ID":"0fa0449e-f842-4605-b814-1e7ede08a5b7","Type":"ContainerDied","Data":"7863f6fcecb57bf0d8f98b9a21144496e336d41b9b7a80cb88f8e4fa54e39a4d"} Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.690304 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c8ba-account-create-update-4wwws" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.693190 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.693495 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-665c5b9dff-g2t96" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" containerID="cri-o://200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.711334 4820 generic.go:334] "Generic (PLEG): container finished" podID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632" exitCode=2 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.711486 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.730192 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx" (OuterVolumeSpecName: "kube-api-access-76nqx") pod "ed145514-af37-491d-bc62-2f84273b4fd0" (UID: "ed145514-af37-491d-bc62-2f84273b4fd0"). InnerVolumeSpecName "kube-api-access-76nqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.734657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.734832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk" (OuterVolumeSpecName: "kube-api-access-js7bk") pod "0fa0449e-f842-4605-b814-1e7ede08a5b7" (UID: "0fa0449e-f842-4605-b814-1e7ede08a5b7"). InnerVolumeSpecName "kube-api-access-js7bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.737589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7" (OuterVolumeSpecName: "kube-api-access-lgms7") pod "95200e0a-ca93-4303-80af-8b950ddc8746" (UID: "95200e0a-ca93-4303-80af-8b950ddc8746"). InnerVolumeSpecName "kube-api-access-lgms7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.737657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c" (OuterVolumeSpecName: "kube-api-access-66b7c") pod "67b282c5-1012-4188-bc31-b8e7e794bb77" (UID: "67b282c5-1012-4188-bc31-b8e7e794bb77"). InnerVolumeSpecName "kube-api-access-66b7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.749303 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750487 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750627 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") pod \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750765 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") pod \"9eb570ff-2a5e-4913-a84f-346579eaa104\" (UID: \"9eb570ff-2a5e-4913-a84f-346579eaa104\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.750895 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") pod \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\" (UID: \"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05\") " Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.751915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752195 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed145514-af37-491d-bc62-2f84273b4fd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js7bk\" (UniqueName: \"kubernetes.io/projected/0fa0449e-f842-4605-b814-1e7ede08a5b7-kube-api-access-js7bk\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752238 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95200e0a-ca93-4303-80af-8b950ddc8746-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 
07:10:01.752275 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752289 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgms7\" (UniqueName: \"kubernetes.io/projected/95200e0a-ca93-4303-80af-8b950ddc8746-kube-api-access-lgms7\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752300 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9235cff6-e0e8-471a-9377-26dfcfd84dac-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752317 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fa0449e-f842-4605-b814-1e7ede08a5b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752328 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66b7c\" (UniqueName: \"kubernetes.io/projected/67b282c5-1012-4188-bc31-b8e7e794bb77-kube-api-access-66b7c\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752340 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmtfw\" (UniqueName: \"kubernetes.io/projected/9235cff6-e0e8-471a-9377-26dfcfd84dac-kube-api-access-gmtfw\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752362 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67b282c5-1012-4188-bc31-b8e7e794bb77-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.752377 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-76nqx\" (UniqueName: \"kubernetes.io/projected/ed145514-af37-491d-bc62-2f84273b4fd0-kube-api-access-76nqx\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.769097 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.769192 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:02.269161799 +0000 UTC m=+1377.302245997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : configmap "openstack-scripts" not found Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.769959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" (UID: "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.783599 4820 projected.go:194] Error preparing data for projected volume kube-api-access-p7t2x for pod openstack/keystone-b298-account-create-update-hxmxb: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.783668 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:02.283643024 +0000 UTC m=+1377.316727222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p7t2x" (UniqueName: "kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.818430 4820 scope.go:117] "RemoveContainer" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.819577 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074\": container with ID starting with 275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074 not found: ID does not exist" containerID="275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.819616 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074"} err="failed to get container status 
\"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074\": rpc error: code = NotFound desc = could not find container \"275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074\": container with ID starting with 275c531f132b262362430d3d28d449be043d3ca648e2930b8315047400ba1074 not found: ID does not exist" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.819660 4820 scope.go:117] "RemoveContainer" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.831072 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9" (OuterVolumeSpecName: "kube-api-access-wlsm9") pod "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" (UID: "6fbdfb60-d58f-4949-a33c-f17e9ea2cd05"). InnerVolumeSpecName "kube-api-access-wlsm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: E0221 07:10:01.833589 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677\": container with ID starting with 3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677 not found: ID does not exist" containerID="3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.833638 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677"} err="failed to get container status \"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677\": rpc error: code = NotFound desc = could not find container \"3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677\": container with ID starting with 3ca2104451b97daf07de5118191dd252e2687b1faeeabb63a63be9d306d70677 not 
found: ID does not exist" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.835292 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455bfe0a-a135-4900-8b15-ce584dc8a5bb" path="/var/lib/kubelet/pods/455bfe0a-a135-4900-8b15-ce584dc8a5bb/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.837344 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d07086-c2e8-4351-bac8-b99c485826c4" path="/var/lib/kubelet/pods/96d07086-c2e8-4351-bac8-b99c485826c4/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.838647 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9866838-084f-4340-b72d-5dba3461661e" path="/var/lib/kubelet/pods/a9866838-084f-4340-b72d-5dba3461661e/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.839707 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d781b010-be2e-465d-9789-d6188ac5a30e" path="/var/lib/kubelet/pods/d781b010-be2e-465d-9789-d6188ac5a30e/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.840439 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d6374d-1595-4586-b161-d199a2b39068" path="/var/lib/kubelet/pods/d7d6374d-1595-4586-b161-d199a2b39068/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.840912 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc228462-9ac8-475c-859b-bbce5678a5ea" path="/var/lib/kubelet/pods/dc228462-9ac8-475c-859b-bbce5678a5ea/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.843829 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e533e163-2ccc-4468-9083-c9bf711b0dfb" path="/var/lib/kubelet/pods/e533e163-2ccc-4468-9083-c9bf711b0dfb/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.843990 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc" (OuterVolumeSpecName: "kube-api-access-cq6fc") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "kube-api-access-cq6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.845163 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26b24bc-e904-49a1-b2bc-d140b0032b83" path="/var/lib/kubelet/pods/f26b24bc-e904-49a1-b2bc-d140b0032b83/volumes" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.847860 4820 generic.go:334] "Generic (PLEG): container finished" podID="8c841249-7293-4826-b05f-e4a189aaef07" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.856219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6fc\" (UniqueName: \"kubernetes.io/projected/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-api-access-cq6fc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.856248 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.856268 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsm9\" (UniqueName: \"kubernetes.io/projected/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05-kube-api-access-wlsm9\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.882862 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67dd4454fc-lr4lq" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" 
containerID="cri-o://50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.883446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67dd4454fc-lr4lq" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" containerID="cri-o://0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" gracePeriod=30 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.887646 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4e9a-account-create-update-4996c" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.906622 4820 generic.go:334] "Generic (PLEG): container finished" podID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerID="c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.914255 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cd19-account-create-update-77csv" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.917869 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerID="2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.923685 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cffb45b79-w6bp8" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.928156 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerID="0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.938864 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.953365 4820 generic.go:334] "Generic (PLEG): container finished" podID="899bd84b-c67f-4a89-9f92-a68094530566" containerID="765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c" exitCode=0 Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.955532 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 07:10:01 crc kubenswrapper[4820]: I0221 07:10:01.964416 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c516-account-create-update-vrfb9" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.006516 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-67dd4454fc-lr4lq" podStartSLOduration=7.006496344 podStartE2EDuration="7.006496344s" podCreationTimestamp="2026-02-21 07:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:10:01.905772556 +0000 UTC m=+1376.938856754" watchObservedRunningTime="2026-02-21 07:10:02.006496344 +0000 UTC m=+1377.039580542" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.016145 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.016277 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" 
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.163221 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.177570 4820 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.189873 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.206329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.217501 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284401 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284421 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.284432 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.284831 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.284882 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.284865838 +0000 UTC m=+1378.317950036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : configmap "openstack-scripts" not found
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.308445 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.308682 4820 projected.go:194] Error preparing data for projected volume kube-api-access-p7t2x for pod openstack/keystone-b298-account-create-update-hxmxb: failed to fetch token: serviceaccounts "galera-openstack" not found
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.308730 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.308711289 +0000 UTC m=+1378.341795487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7t2x" (UniqueName: "kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : failed to fetch token: serviceaccounts "galera-openstack" not found
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.322419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data" (OuterVolumeSpecName: "config-data") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.337771 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.339456 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76b79c9766-s694g" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.349549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b81af4bd-d2af-4a26-8f4d-a3e612778607" (UID: "b81af4bd-d2af-4a26-8f4d-a3e612778607"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.349668 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385881 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385919 4820 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b81af4bd-d2af-4a26-8f4d-a3e612778607-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385929 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.385938 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.403039 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "9eb570ff-2a5e-4913-a84f-346579eaa104" (UID: "9eb570ff-2a5e-4913-a84f-346579eaa104"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.412081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9235cff6-e0e8-471a-9377-26dfcfd84dac" (UID: "9235cff6-e0e8-471a-9377-26dfcfd84dac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.486912 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9235cff6-e0e8-471a-9377-26dfcfd84dac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.486960 4820 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb570ff-2a5e-4913-a84f-346579eaa104-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.782363 4820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.086s"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerDied","Data":"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9eb570ff-2a5e-4913-a84f-346579eaa104","Type":"ContainerDied","Data":"71365a9e22568ef1b7939e8176b425016fd726c9f3eda1b1728111b2c07781f8"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782463 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782539 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"]
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerDied","Data":"498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782905 4820 scope.go:117] "RemoveContainer" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.782994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6cfkd"]
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783020 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6cfkd"]
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783037 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bcvpx"]
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerStarted","Data":"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783073 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4e9a-account-create-update-4996c" event={"ID":"6fbdfb60-d58f-4949-a33c-f17e9ea2cd05","Type":"ContainerDied","Data":"a87feecffdb740c28defaad0723df39d6ec5a1e99d858b490aafa6edf23d56e8"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783085 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerDied","Data":"c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cd19-account-create-update-77csv" event={"ID":"95200e0a-ca93-4303-80af-8b950ddc8746","Type":"ContainerDied","Data":"57cf883f5a62845b5703775e5d378694a44bfaa0c7228605211777d063adb94a"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerDied","Data":"2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cffb45b79-w6bp8" event={"ID":"9235cff6-e0e8-471a-9377-26dfcfd84dac","Type":"ContainerDied","Data":"799aa64333911f7111f98ffff76ee1c66aebdf83eeaa6dc6c45e5389c74e915a"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerDied","Data":"0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783155 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b81af4bd-d2af-4a26-8f4d-a3e612778607","Type":"ContainerDied","Data":"11ff38cd3a84b9695da2170ae34b744fdcf1335c31df7ea094d308bb6b4a401a"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783166 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerDied","Data":"765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.783176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c516-account-create-update-vrfb9" event={"ID":"67b282c5-1012-4188-bc31-b8e7e794bb77","Type":"ContainerDied","Data":"0bf5947fd1441fc936e5ed5dfa7b04468b4ee6948a25b45d63c164f9452941fa"}
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.783078 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-p7t2x operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-b298-account-create-update-hxmxb" podUID="5869267a-13d5-4879-a3b0-d0e12ee57b8c"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.813906 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" probeResult="failure" output="command timed out"
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.943601 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.944143 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.944799 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.944932 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.947768 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.947875 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.947909 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server"
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.949805 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.949848 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.970835 4820 scope.go:117] "RemoveContainer" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"
Feb 21 07:10:02 crc kubenswrapper[4820]: E0221 07:10:02.971076 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632\": container with ID starting with 4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632 not found: ID does not exist" containerID="4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.971108 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632"} err="failed to get container status \"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632\": rpc error: code = NotFound desc = could not find container \"4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632\": container with ID starting with 4540a0cec4981afec31e9e7e996208134670513fba0ffda76fa66c004e651632 not found: ID does not exist"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.971132 4820 scope.go:117] "RemoveContainer" containerID="974657f758f342af6918d1323b07f9c2cdb0b997d3d6058cb1ab6f19ab1ef80b"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.977133 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.978866 4820 generic.go:334] "Generic (PLEG): container finished" podID="61de836b-112e-4002-80c7-5ab77d4b9069" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35" exitCode=143
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.978953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerDied","Data":"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.981534 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85cb846b98-bwgbn" event={"ID":"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe","Type":"ContainerDied","Data":"d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08"}
Feb 21 07:10:02 crc kubenswrapper[4820]: I0221 07:10:02.981578 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d03e1e7ca4ed1e17b146f874e8b7c512ef17ac2c552ca38fe854b5ace6b4ef08"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.005484 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" probeResult="failure" output=<
Feb 21 07:10:03 crc kubenswrapper[4820]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0
Feb 21 07:10:03 crc kubenswrapper[4820]: >
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.005740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8c841249-7293-4826-b05f-e4a189aaef07","Type":"ContainerDied","Data":"b5d7777c4805cb6f20d3b114fa2f8d4c4b48ab9ca066a18749eb9c88daef742c"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.005781 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5d7777c4805cb6f20d3b114fa2f8d4c4b48ab9ca066a18749eb9c88daef742c"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010446 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010538 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010560 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010582 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010645 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010698 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") pod \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\" (UID: \"ef3827c2-ee55-4f86-a752-d7cbc9c6454e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.010749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") pod \"899bd84b-c67f-4a89-9f92-a68094530566\" (UID: \"899bd84b-c67f-4a89-9f92-a68094530566\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.012516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.012786 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs" (OuterVolumeSpecName: "logs") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.013075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015477 4820 generic.go:334] "Generic (PLEG): container finished" podID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerID="21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52" exitCode=0
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015571 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerDied","Data":"21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a112132d-4a29-460c-985d-b0ca2ddb1aa6","Type":"ContainerDied","Data":"a5ce2f4d318a8be4343d1c00aa8f9b38475fee7ae1d50bf1b4be7e34360eab36"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.015616 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ce2f4d318a8be4343d1c00aa8f9b38475fee7ae1d50bf1b4be7e34360eab36"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.016431 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs" (OuterVolumeSpecName: "logs") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.022569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.023162 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.025063 4820 generic.go:334] "Generic (PLEG): container finished" podID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerID="df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96" exitCode=0
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.025143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerDied","Data":"df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.028274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899bd84b-c67f-4a89-9f92-a68094530566","Type":"ContainerDied","Data":"5cb1b96062485be8b82f57585bda85bcd24b219427b4dff91edc9fb75a52f886"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.028488 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.033461 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4" (OuterVolumeSpecName: "kube-api-access-5rnp4") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "kube-api-access-5rnp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.035201 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerStarted","Data":"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"}
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.035389 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" containerID="cri-o://5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" gracePeriod=30
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.035570 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" containerID="cri-o://280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" gracePeriod=30
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.037437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts" (OuterVolumeSpecName: "scripts") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.044808 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8" (OuterVolumeSpecName: "kube-api-access-4dzx8") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "kube-api-access-4dzx8".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.071738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3827c2-ee55-4f86-a752-d7cbc9c6454e","Type":"ContainerDied","Data":"7dbefddbd7787a89f99dc670daea40f0d47cd75502d636a14167dff4a8fa59e9"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.071837 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.072119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts" (OuterVolumeSpecName: "scripts") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076674 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerID="a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerDied","Data":"a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f99a57a-608b-4678-9be5-abc4347c8bcb","Type":"ContainerDied","Data":"49654605e076770c4b1f63011fc38c031abfbddaf42bcc3556d4899ef0c6f4eb"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.076830 4820 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="49654605e076770c4b1f63011fc38c031abfbddaf42bcc3556d4899ef0c6f4eb" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.086670 4820 generic.go:334] "Generic (PLEG): container finished" podID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerID="8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8" exitCode=1 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.086762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerDied","Data":"8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.087421 4820 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-bcvpx" secret="" err="secret \"galera-openstack-dockercfg-ldndf\" not found" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.087460 4820 scope.go:117] "RemoveContainer" containerID="8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.087833 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-bcvpx_openstack(73b1b012-98c9-49cf-852d-a2ff95b746cf)\"" pod="openstack/root-account-create-update-bcvpx" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.096868 4820 generic.go:334] "Generic (PLEG): container finished" podID="4709782f-54e7-4a78-a56e-8f58a5556501" containerID="84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.096962 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" 
event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerDied","Data":"84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.096991 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76b79c9766-s694g" event={"ID":"4709782f-54e7-4a78-a56e-8f58a5556501","Type":"ContainerDied","Data":"c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.097005 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f2a45c53e61599400b77234d2708a02138c72044bb416b4b3506cba8df90b6" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.107646 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" podStartSLOduration=8.107618414 podStartE2EDuration="8.107618414s" podCreationTimestamp="2026-02-21 07:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:10:03.072803574 +0000 UTC m=+1378.105887782" watchObservedRunningTime="2026-02-21 07:10:03.107618414 +0000 UTC m=+1378.140702612" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116296 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rnp4\" (UniqueName: \"kubernetes.io/projected/899bd84b-c67f-4a89-9f92-a68094530566-kube-api-access-5rnp4\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116340 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116356 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/899bd84b-c67f-4a89-9f92-a68094530566-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116370 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116424 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116436 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899bd84b-c67f-4a89-9f92-a68094530566-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116449 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116460 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116471 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dzx8\" (UniqueName: \"kubernetes.io/projected/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-kube-api-access-4dzx8\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.116482 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.117670 4820 
configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.118064 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts podName:73b1b012-98c9-49cf-852d-a2ff95b746cf nodeName:}" failed. No retries permitted until 2026-02-21 07:10:03.618043018 +0000 UTC m=+1378.651127216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts") pod "root-account-create-update-bcvpx" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf") : configmap "openstack-scripts" not found Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.142127 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff","Type":"ContainerDied","Data":"b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.142181 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b772f97d4d573dc6a8384e377410403688e82c34f3155619a1ec77398b45ecb4" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.154725 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ca75969-e299-435a-a607-d470d4ab831e" containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.154808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerDied","Data":"f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157051 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerID="841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7" exitCode=0 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157157 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerDied","Data":"841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157339 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0e16d52c-9322-49cf-9948-8d1c56c0a5ed","Type":"ContainerDied","Data":"c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0"} Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.157357 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3d5f579c82b6a6e56f3ced2d9af8224c99c222fa9ba6b80b60b56caad99b2a0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.161622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.195107 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" containerID="cri-o://8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7" gracePeriod=30 Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.206219 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.218716 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.218756 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.236723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data" (OuterVolumeSpecName: "config-data") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.237333 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.240522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.242750 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "899bd84b-c67f-4a89-9f92-a68094530566" (UID: "899bd84b-c67f-4a89-9f92-a68094530566"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.255403 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.256484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data" (OuterVolumeSpecName: "config-data") pod "ef3827c2-ee55-4f86-a752-d7cbc9c6454e" (UID: "ef3827c2-ee55-4f86-a752-d7cbc9c6454e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.298749 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.318630 4820 scope.go:117] "RemoveContainer" containerID="a7985c1e46addff2bf4510896c079d9be02b4a1acfa0993dfb445f66ebd5f38f" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") pod \"8c841249-7293-4826-b05f-e4a189aaef07\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319590 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") pod \"8c841249-7293-4826-b05f-e4a189aaef07\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319667 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") pod \"8c841249-7293-4826-b05f-e4a189aaef07\" (UID: \"8c841249-7293-4826-b05f-e4a189aaef07\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.319978 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") pod \"keystone-b298-account-create-update-hxmxb\" (UID: \"5869267a-13d5-4879-a3b0-d0e12ee57b8c\") " pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320019 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320028 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320037 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320045 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3827c2-ee55-4f86-a752-d7cbc9c6454e-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320053 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.320063 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd84b-c67f-4a89-9f92-a68094530566-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.324348 
4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.324449 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:05.324402969 +0000 UTC m=+1380.357487267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : configmap "openstack-scripts" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.325508 4820 projected.go:194] Error preparing data for projected volume kube-api-access-p7t2x for pod openstack/keystone-b298-account-create-update-hxmxb: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.325586 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x podName:5869267a-13d5-4879-a3b0-d0e12ee57b8c nodeName:}" failed. No retries permitted until 2026-02-21 07:10:05.325564479 +0000 UTC m=+1380.358648787 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7t2x" (UniqueName: "kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x") pod "keystone-b298-account-create-update-hxmxb" (UID: "5869267a-13d5-4879-a3b0-d0e12ee57b8c") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.335983 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.355939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv" (OuterVolumeSpecName: "kube-api-access-jhdnv") pod "8c841249-7293-4826-b05f-e4a189aaef07" (UID: "8c841249-7293-4826-b05f-e4a189aaef07"). InnerVolumeSpecName "kube-api-access-jhdnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.366412 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c841249-7293-4826-b05f-e4a189aaef07" (UID: "8c841249-7293-4826-b05f-e4a189aaef07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.369789 4820 scope.go:117] "RemoveContainer" containerID="437b9754b509c1466ba129e34883f39fc42e43b2b7d6fb57366f35e57d0c3b25" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.383037 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.398953 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.400768 4820 scope.go:117] "RemoveContainer" containerID="4841d214c6aeccf3e3adc2843ea15574251aca74a386c5d68c07feac2783f7c1" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.426571 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.428356 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdnv\" (UniqueName: \"kubernetes.io/projected/8c841249-7293-4826-b05f-e4a189aaef07-kube-api-access-jhdnv\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.428372 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.456983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data" (OuterVolumeSpecName: "config-data") pod "8c841249-7293-4826-b05f-e4a189aaef07" (UID: "8c841249-7293-4826-b05f-e4a189aaef07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.461168 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.479762 4820 scope.go:117] "RemoveContainer" containerID="765217377e07f3bfb154c1825d8e9aa8ce15d008d63d260388c182a058e66b3c" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.487860 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.514623 4820 scope.go:117] "RemoveContainer" containerID="9d5edce8d453916f71c03d27dbadd27156155685e8222590f97716c227514067" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.516636 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.527038 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530550 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") " Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqb8\" (UniqueName: 
\"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530656 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530678 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.530883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.532505 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs" (OuterVolumeSpecName: "logs") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.533591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs" (OuterVolumeSpecName: "logs") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.534309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data" (OuterVolumeSpecName: "config-data") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.548123 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts" (OuterVolumeSpecName: "scripts") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.550866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8" (OuterVolumeSpecName: "kube-api-access-tbqb8") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "kube-api-access-tbqb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.550968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt" (OuterVolumeSpecName: "kube-api-access-gnmpt") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "kube-api-access-gnmpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564552 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564768 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") pod \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\" (UID: \"ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.564967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") pod \"4f99a57a-608b-4678-9be5-abc4347c8bcb\" (UID: \"4f99a57a-608b-4678-9be5-abc4347c8bcb\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.565013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.565032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") pod \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\" (UID: \"9a9bb0a5-0caa-4137-b448-a2b55d9be1ff\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.567204 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.567546 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.569583 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.570140 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.570894 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c841249-7293-4826-b05f-e4a189aaef07-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.571839 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.573352 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.573389 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqb8\" (UniqueName: \"kubernetes.io/projected/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-kube-api-access-tbqb8\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.573969 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.574005 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.574060 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnmpt\" (UniqueName: \"kubernetes.io/projected/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-kube-api-access-gnmpt\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.580747 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts" (OuterVolumeSpecName: "scripts") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.588655 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.589325 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.603567 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.605956 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2" (OuterVolumeSpecName: "kube-api-access-mcgq2") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "kube-api-access-mcgq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.612332 4820 scope.go:117] "RemoveContainer" containerID="0c7af27d09ebb00239341b37c16edf7677edec982563c281c9fa2b1e765704e3"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.633061 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-cffb45b79-w6bp8"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.638393 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.644075 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.659484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674684 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674728 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674762 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674883 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674971 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.674989 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") pod \"0ca75969-e299-435a-a607-d470d4ab831e\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675217 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675246 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") pod \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\" (UID: \"0e16d52c-9322-49cf-9948-8d1c56c0a5ed\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675585 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcgq2\" (UniqueName: \"kubernetes.io/projected/4f99a57a-608b-4678-9be5-abc4347c8bcb-kube-api-access-mcgq2\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675615 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675626 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675636 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675644 4820 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f99a57a-608b-4678-9be5-abc4347c8bcb-kolla-config\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.675653 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.679704 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs" (OuterVolumeSpecName: "logs") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.680066 4820 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.680161 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts podName:73b1b012-98c9-49cf-852d-a2ff95b746cf nodeName:}" failed. No retries permitted until 2026-02-21 07:10:04.680138474 +0000 UTC m=+1379.713222732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts") pod "root-account-create-update-bcvpx" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf") : configmap "openstack-scripts" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.681411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs" (OuterVolumeSpecName: "logs") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.683944 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs" (OuterVolumeSpecName: "logs") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.686720 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.686843 4820 scope.go:117] "RemoveContainer" containerID="d451738c8f6f4e609144531dffaae738937778e3a27f1cdf9e62e3a7d1480b96"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.687530 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm" (OuterVolumeSpecName: "kube-api-access-jhqdm") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "kube-api-access-jhqdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b" (OuterVolumeSpecName: "kube-api-access-xmf6b") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "kube-api-access-xmf6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729651 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.729855 4820 scope.go:117] "RemoveContainer" containerID="134fddb7ed13f71efcb8a67bce858e36224f138e4b68654fc6cd13c721b456f5"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.735069 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1db760-d9fc-477f-bc0b-8119d247253b" path="/var/lib/kubelet/pods/7b1db760-d9fc-477f-bc0b-8119d247253b/volumes"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.735903 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8377d0c3-40a1-4a4a-b6c8-67f66dfa602d" path="/var/lib/kubelet/pods/8377d0c3-40a1-4a4a-b6c8-67f66dfa602d/volumes"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.736564 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9235cff6-e0e8-471a-9377-26dfcfd84dac" path="/var/lib/kubelet/pods/9235cff6-e0e8-471a-9377-26dfcfd84dac/volumes"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.737982 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81af4bd-d2af-4a26-8f4d-a3e612778607" path="/var/lib/kubelet/pods/b81af4bd-d2af-4a26-8f4d-a3e612778607/volumes"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.739330 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data" (OuterVolumeSpecName: "config-data") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.746101 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.768474 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.776454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.776781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") pod \"4709782f-54e7-4a78-a56e-8f58a5556501\" (UID: \"4709782f-54e7-4a78-a56e-8f58a5556501\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.777637 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.777804 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.777920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778002 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") pod \"0ca75969-e299-435a-a607-d470d4ab831e\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778086 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778182 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") pod \"0ca75969-e299-435a-a607-d470d4ab831e\" (UID: \"0ca75969-e299-435a-a607-d470d4ab831e\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778456 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778556 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") pod \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\" (UID: \"a112132d-4a29-460c-985d-b0ca2ddb1aa6\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.778649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") pod \"6dbc8f44-c54c-42c0-8430-742c6bb61165\" (UID: \"6dbc8f44-c54c-42c0-8430-742c6bb61165\") "
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779285 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779415 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dbc8f44-c54c-42c0-8430-742c6bb61165-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779496 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhqdm\" (UniqueName: \"kubernetes.io/projected/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-kube-api-access-jhqdm\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779577 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779656 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779733 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmf6b\" (UniqueName: \"kubernetes.io/projected/4709782f-54e7-4a78-a56e-8f58a5556501-kube-api-access-xmf6b\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779800 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4709782f-54e7-4a78-a56e-8f58a5556501-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779881 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.779991 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.780088 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.780164 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.785440 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.785477 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a80b-account-create-update-w6rwf"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.785484 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.790302 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs" (OuterVolumeSpecName: "logs") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.792434 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.792487 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data podName:fa49984a-9511-4449-adc6-997899961f73 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:11.792471158 +0000 UTC m=+1386.825555356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data") pod "rabbitmq-cell1-server-0" (UID: "fa49984a-9511-4449-adc6-997899961f73") : configmap "rabbitmq-cell1-config-data" not found
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.804148 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.816602 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.816619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq" (OuterVolumeSpecName: "kube-api-access-4nbfq") pod "0ca75969-e299-435a-a607-d470d4ab831e" (UID: "0ca75969-e299-435a-a607-d470d4ab831e"). InnerVolumeSpecName "kube-api-access-4nbfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.817784 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cd19-account-create-update-77csv"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.833507 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.835226 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm" (OuterVolumeSpecName: "kube-api-access-f88cm") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "kube-api-access-f88cm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.840232 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c8ba-account-create-update-4wwws"]
Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.844497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5" (OuterVolumeSpecName: "kube-api-access-c9wf5") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "kube-api-access-c9wf5".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.863650 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.864896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "4f99a57a-608b-4678-9be5-abc4347c8bcb" (UID: "4f99a57a-608b-4678-9be5-abc4347c8bcb"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.871471 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4e9a-account-create-update-4996c"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883069 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883094 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883106 4820 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f99a57a-608b-4678-9be5-abc4347c8bcb-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883115 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wf5\" (UniqueName: \"kubernetes.io/projected/a112132d-4a29-460c-985d-b0ca2ddb1aa6-kube-api-access-c9wf5\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc 
kubenswrapper[4820]: I0221 07:10:03.883128 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a112132d-4a29-460c-985d-b0ca2ddb1aa6-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883142 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nbfq\" (UniqueName: \"kubernetes.io/projected/0ca75969-e299-435a-a607-d470d4ab831e-kube-api-access-4nbfq\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883153 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f88cm\" (UniqueName: \"kubernetes.io/projected/6dbc8f44-c54c-42c0-8430-742c6bb61165-kube-api-access-f88cm\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.883242 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:10:03 crc kubenswrapper[4820]: E0221 07:10:03.883295 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:11.883282275 +0000 UTC m=+1386.916366473 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.883639 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.889766 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c516-account-create-update-vrfb9"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.891696 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.903193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ca75969-e299-435a-a607-d470d4ab831e" (UID: "0ca75969-e299-435a-a607-d470d4ab831e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.912874 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.930592 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.937129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.939839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data" (OuterVolumeSpecName: "config-data") pod "0ca75969-e299-435a-a607-d470d4ab831e" (UID: "0ca75969-e299-435a-a607-d470d4ab831e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.954739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data" (OuterVolumeSpecName: "config-data") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.959283 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.966604 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" (UID: "9a9bb0a5-0caa-4137-b448-a2b55d9be1ff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.975628 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data" (OuterVolumeSpecName: "config-data") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.976116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data" (OuterVolumeSpecName: "config-data") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.986191 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.988552 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989468 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989497 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989513 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989527 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989539 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989551 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989565 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989577 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca75969-e299-435a-a607-d470d4ab831e-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.989588 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.990145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:03 crc kubenswrapper[4820]: I0221 07:10:03.999337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.011916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.012100 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a112132d-4a29-460c-985d-b0ca2ddb1aa6" (UID: "a112132d-4a29-460c-985d-b0ca2ddb1aa6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.023267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" (UID: "ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.031610 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0e16d52c-9322-49cf-9948-8d1c56c0a5ed" (UID: "0e16d52c-9322-49cf-9948-8d1c56c0a5ed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.037110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data" (OuterVolumeSpecName: "config-data") pod "6dbc8f44-c54c-42c0-8430-742c6bb61165" (UID: "6dbc8f44-c54c-42c0-8430-742c6bb61165"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.053651 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data" (OuterVolumeSpecName: "config-data") pod "4709782f-54e7-4a78-a56e-8f58a5556501" (UID: "4709782f-54e7-4a78-a56e-8f58a5556501"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.061128 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.074628 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096814 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096844 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e16d52c-9322-49cf-9948-8d1c56c0a5ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096856 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096865 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096877 4820 reconciler_common.go:293] "Volume detached for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096889 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dbc8f44-c54c-42c0-8430-742c6bb61165-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096923 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4709782f-54e7-4a78-a56e-8f58a5556501-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.096931 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a112132d-4a29-460c-985d-b0ca2ddb1aa6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.170699 4820 generic.go:334] "Generic (PLEG): container finished" podID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" exitCode=143 Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.170764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerDied","Data":"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"} Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.180525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ca75969-e299-435a-a607-d470d4ab831e","Type":"ContainerDied","Data":"fc21a0a7c4dd2451190e354831336d49dba3efa2b6ff9cf991a583d8861094cf"} Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.180575 4820 scope.go:117] "RemoveContainer" 
containerID="f4e9a9aab5d99ba59d907f76eb3f4f7d6c16f8afc774109687191a104fbb8abd" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.180775 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.197769 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" event={"ID":"6dbc8f44-c54c-42c0-8430-742c6bb61165","Type":"ContainerDied","Data":"9e23535ae9303b01da633c9a5de5b1cca080fe7244d856307bd78e440fdb1a72"} Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.197866 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79b8cb94b4-h6tqh" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200007 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200324 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76b79c9766-s694g" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.200971 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b298-account-create-update-hxmxb" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.201141 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85cb846b98-bwgbn" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.201394 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.201405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.205327 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.238629 4820 scope.go:117] "RemoveContainer" containerID="df3a8b6f8128140f50c80025c22d3b291ab89d34796d0307384acb7c6dbbcc96" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.277685 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.286820 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b298-account-create-update-hxmxb"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.294622 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.309564 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5869267a-13d5-4879-a3b0-d0e12ee57b8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.309601 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7t2x\" (UniqueName: \"kubernetes.io/projected/5869267a-13d5-4879-a3b0-d0e12ee57b8c-kube-api-access-p7t2x\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.316572 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: 
connection refused" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.319299 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.400016 4820 scope.go:117] "RemoveContainer" containerID="3778b0182306b15cbf9e09e147e68dd7624053483e32182b3d2bbe64c15bf395" Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.421525 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.424081 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.427116 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 21 07:10:04 crc kubenswrapper[4820]: E0221 07:10:04.427160 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.443130 4820 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.454516 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.471448 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.482686 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76b79c9766-s694g"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.522468 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.532043 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85cb846b98-bwgbn"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.540535 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.548274 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.554317 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.560083 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.565964 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.572405 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.577763 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:10:04 
crc kubenswrapper[4820]: I0221 07:10:04.586538 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-79b8cb94b4-h6tqh"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.594390 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.602573 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.641641 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bcvpx" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.720636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") pod \"73b1b012-98c9-49cf-852d-a2ff95b746cf\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.720699 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") pod \"73b1b012-98c9-49cf-852d-a2ff95b746cf\" (UID: \"73b1b012-98c9-49cf-852d-a2ff95b746cf\") " Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.721505 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73b1b012-98c9-49cf-852d-a2ff95b746cf" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.724770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq" (OuterVolumeSpecName: "kube-api-access-8btrq") pod "73b1b012-98c9-49cf-852d-a2ff95b746cf" (UID: "73b1b012-98c9-49cf-852d-a2ff95b746cf"). InnerVolumeSpecName "kube-api-access-8btrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.822371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8btrq\" (UniqueName: \"kubernetes.io/projected/73b1b012-98c9-49cf-852d-a2ff95b746cf-kube-api-access-8btrq\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:04 crc kubenswrapper[4820]: I0221 07:10:04.822880 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b1b012-98c9-49cf-852d-a2ff95b746cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.214709 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.214910 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bcvpx" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.214969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bcvpx" event={"ID":"73b1b012-98c9-49cf-852d-a2ff95b746cf","Type":"ContainerDied","Data":"e23762ffd7ce106b9f82fdb1d0d30eef475c43de4d355359cab19dd81674c400"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.215018 4820 scope.go:117] "RemoveContainer" containerID="8e46bf988fca88e52a41735046800e6ec7c614c220632634b1037bebf8ce17a8" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.216613 4820 generic.go:334] "Generic (PLEG): container finished" podID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerID="200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754" exitCode=0 Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.216660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerDied","Data":"200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223534 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa49984a-9511-4449-adc6-997899961f73" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" exitCode=0 Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerDied","Data":"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223620 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"fa49984a-9511-4449-adc6-997899961f73","Type":"ContainerDied","Data":"c7e2b7a7c0a492a7d1fe2c8d85d83a8801b3d4fa1ad893af52ea27c7826ffccc"} Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.223733 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.245623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.245865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.245918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246026 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") pod 
\"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246346 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246380 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.246415 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") pod \"fa49984a-9511-4449-adc6-997899961f73\" (UID: \"fa49984a-9511-4449-adc6-997899961f73\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.247283 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.248110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.248940 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.258654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.259318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info" (OuterVolumeSpecName: "pod-info") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.262118 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.266749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58" (OuterVolumeSpecName: "kube-api-access-cbf58") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "kube-api-access-cbf58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.275183 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.293070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data" (OuterVolumeSpecName: "config-data") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.340026 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf" (OuterVolumeSpecName: "server-conf") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.348958 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.348998 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349009 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349018 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc 
kubenswrapper[4820]: I0221 07:10:05.349026 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa49984a-9511-4449-adc6-997899961f73-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349033 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349042 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbf58\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-kube-api-access-cbf58\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349050 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349058 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa49984a-9511-4449-adc6-997899961f73-pod-info\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.349066 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa49984a-9511-4449-adc6-997899961f73-server-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.367556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fa49984a-9511-4449-adc6-997899961f73" (UID: "fa49984a-9511-4449-adc6-997899961f73"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.378252 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.395085 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.398375 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.405411 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bcvpx"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.449860 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.449931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450067 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450096 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") pod \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\" (UID: \"16ebfdb2-72a8-40c6-b0ed-012f138025b2\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450773 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.450786 4820 reconciler_common.go:293] "Volume detached for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa49984a-9511-4449-adc6-997899961f73-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.450867 4820 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.450932 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data podName:8b1242f9-d2ac-493c-bc89-43f7be597a75 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:13.450917542 +0000 UTC m=+1388.484001740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data") pod "rabbitmq-server-0" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75") : configmap "rabbitmq-config-data" not found Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.453064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.453919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts" (OuterVolumeSpecName: "scripts") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.459124 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.461842 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j" (OuterVolumeSpecName: "kube-api-access-gzm7j") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "kube-api-access-gzm7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.462000 4820 scope.go:117] "RemoveContainer" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.475372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data" (OuterVolumeSpecName: "config-data") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.482809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.498586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.505427 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "16ebfdb2-72a8-40c6-b0ed-012f138025b2" (UID: "16ebfdb2-72a8-40c6-b0ed-012f138025b2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.521563 4820 scope.go:117] "RemoveContainer" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551923 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551952 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551963 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551971 4820 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-gzm7j\" (UniqueName: \"kubernetes.io/projected/16ebfdb2-72a8-40c6-b0ed-012f138025b2-kube-api-access-gzm7j\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551980 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551988 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.551996 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.552003 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ebfdb2-72a8-40c6-b0ed-012f138025b2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.595144 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.603432 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a5b71e95-fe49-48b2-8d7b-575e17855d52/ovn-northd/0.log" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.603503 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.603949 4820 scope.go:117] "RemoveContainer" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.604202 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078\": container with ID starting with 7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078 not found: ID does not exist" containerID="7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604228 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078"} err="failed to get container status \"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078\": rpc error: code = NotFound desc = could not find container \"7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078\": container with ID starting with 7b5af808f2605c9d0157c3e07aab1c1bd94ec4e50f586a3c707f02f81165b078 not found: ID does not exist" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604286 4820 scope.go:117] "RemoveContainer" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.604555 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc\": container with ID starting with 946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc not found: ID does not exist" containerID="946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604583 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc"} err="failed to get container status \"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc\": rpc error: code = NotFound desc = could not find container \"946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc\": container with ID starting with 946d441bcb089d26266af3bf4fbc82af240f58ae7e9f146de897ff5ee9da99dc not found: ID does not exist" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.604834 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663532 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjgb\" (UniqueName: 
\"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.663720 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") pod \"a5b71e95-fe49-48b2-8d7b-575e17855d52\" (UID: \"a5b71e95-fe49-48b2-8d7b-575e17855d52\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.664473 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.666686 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts" (OuterVolumeSpecName: "scripts") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.666699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config" (OuterVolumeSpecName: "config") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.682754 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb" (OuterVolumeSpecName: "kube-api-access-zgjgb") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "kube-api-access-zgjgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.706704 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca75969-e299-435a-a607-d470d4ab831e" path="/var/lib/kubelet/pods/0ca75969-e299-435a-a607-d470d4ab831e/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.709492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" path="/var/lib/kubelet/pods/0e16d52c-9322-49cf-9948-8d1c56c0a5ed/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.710133 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa0449e-f842-4605-b814-1e7ede08a5b7" path="/var/lib/kubelet/pods/0fa0449e-f842-4605-b814-1e7ede08a5b7/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.710622 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" path="/var/lib/kubelet/pods/4709782f-54e7-4a78-a56e-8f58a5556501/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.711914 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" path="/var/lib/kubelet/pods/4f99a57a-608b-4678-9be5-abc4347c8bcb/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.712387 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5869267a-13d5-4879-a3b0-d0e12ee57b8c" path="/var/lib/kubelet/pods/5869267a-13d5-4879-a3b0-d0e12ee57b8c/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.712756 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b282c5-1012-4188-bc31-b8e7e794bb77" path="/var/lib/kubelet/pods/67b282c5-1012-4188-bc31-b8e7e794bb77/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.713187 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" 
path="/var/lib/kubelet/pods/6dbc8f44-c54c-42c0-8430-742c6bb61165/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.714745 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbdfb60-d58f-4949-a33c-f17e9ea2cd05" path="/var/lib/kubelet/pods/6fbdfb60-d58f-4949-a33c-f17e9ea2cd05/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.715229 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" path="/var/lib/kubelet/pods/73b1b012-98c9-49cf-852d-a2ff95b746cf/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.716111 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899bd84b-c67f-4a89-9f92-a68094530566" path="/var/lib/kubelet/pods/899bd84b-c67f-4a89-9f92-a68094530566/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.717874 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c841249-7293-4826-b05f-e4a189aaef07" path="/var/lib/kubelet/pods/8c841249-7293-4826-b05f-e4a189aaef07/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.718438 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95200e0a-ca93-4303-80af-8b950ddc8746" path="/var/lib/kubelet/pods/95200e0a-ca93-4303-80af-8b950ddc8746/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.719206 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" path="/var/lib/kubelet/pods/9a9bb0a5-0caa-4137-b448-a2b55d9be1ff/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.720044 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" path="/var/lib/kubelet/pods/9eb570ff-2a5e-4913-a84f-346579eaa104/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.721539 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" 
path="/var/lib/kubelet/pods/a112132d-4a29-460c-985d-b0ca2ddb1aa6/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.722215 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" path="/var/lib/kubelet/pods/ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.722846 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed145514-af37-491d-bc62-2f84273b4fd0" path="/var/lib/kubelet/pods/ed145514-af37-491d-bc62-2f84273b4fd0/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.724035 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" path="/var/lib/kubelet/pods/ef3827c2-ee55-4f86-a752-d7cbc9c6454e/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.725189 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa49984a-9511-4449-adc6-997899961f73" path="/var/lib/kubelet/pods/fa49984a-9511-4449-adc6-997899961f73/volumes" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.745334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.747575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: "a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765426 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765458 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5b71e95-fe49-48b2-8d7b-575e17855d52-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765467 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjgb\" (UniqueName: \"kubernetes.io/projected/a5b71e95-fe49-48b2-8d7b-575e17855d52-kube-api-access-zgjgb\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765477 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765487 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.765495 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5b71e95-fe49-48b2-8d7b-575e17855d52-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.783596 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a5b71e95-fe49-48b2-8d7b-575e17855d52" (UID: 
"a5b71e95-fe49-48b2-8d7b-575e17855d52"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.798755 4820 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 21 07:10:05 crc kubenswrapper[4820]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-21T07:09:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 21 07:10:05 crc kubenswrapper[4820]: /etc/init.d/functions: line 589: 449 Alarm clock "$@" Feb 21 07:10:05 crc kubenswrapper[4820]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sfpp9" message=< Feb 21 07:10:05 crc kubenswrapper[4820]: Exiting ovn-controller (1) [FAILED] Feb 21 07:10:05 crc kubenswrapper[4820]: Killing ovn-controller (1) [ OK ] Feb 21 07:10:05 crc kubenswrapper[4820]: Killing ovn-controller (1) with SIGKILL [ OK ] Feb 21 07:10:05 crc kubenswrapper[4820]: 2026-02-21T07:09:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 21 07:10:05 crc kubenswrapper[4820]: /etc/init.d/functions: line 589: 449 Alarm clock "$@" Feb 21 07:10:05 crc kubenswrapper[4820]: > Feb 21 07:10:05 crc kubenswrapper[4820]: E0221 07:10:05.798794 4820 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 21 07:10:05 crc kubenswrapper[4820]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-21T07:09:58Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 21 07:10:05 crc kubenswrapper[4820]: /etc/init.d/functions: line 589: 449 Alarm clock "$@" Feb 21 07:10:05 crc kubenswrapper[4820]: > pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" containerID="cri-o://baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.798831 
4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sfpp9" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" containerID="cri-o://baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" gracePeriod=22 Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.813471 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866711 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866813 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866875 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.866951 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867000 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: 
\"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867367 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5b71e95-fe49-48b2-8d7b-575e17855d52-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.867718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.871073 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.871084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.883014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.884353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info" (OuterVolumeSpecName: "pod-info") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.884422 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.884621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2" (OuterVolumeSpecName: "kube-api-access-k9gg2") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "kube-api-access-k9gg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.893894 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data" (OuterVolumeSpecName: "config-data") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.905182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.954022 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf" (OuterVolumeSpecName: "server-conf") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.968984 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969277 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1242f9-d2ac-493c-bc89-43f7be597a75-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969361 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969440 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969535 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969608 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1242f9-d2ac-493c-bc89-43f7be597a75-pod-info\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969676 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969811 4820 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9gg2\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-kube-api-access-k9gg2\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969914 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.969987 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1242f9-d2ac-493c-bc89-43f7be597a75-server-conf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:05 crc kubenswrapper[4820]: I0221 07:10:05.989359 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.079104 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.079775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") pod \"8b1242f9-d2ac-493c-bc89-43f7be597a75\" (UID: \"8b1242f9-d2ac-493c-bc89-43f7be597a75\") " Feb 21 07:10:06 crc kubenswrapper[4820]: W0221 07:10:06.079905 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8b1242f9-d2ac-493c-bc89-43f7be597a75/volumes/kubernetes.io~projected/rabbitmq-confd Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.079999 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8b1242f9-d2ac-493c-bc89-43f7be597a75" (UID: "8b1242f9-d2ac-493c-bc89-43f7be597a75"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.080376 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1242f9-d2ac-493c-bc89-43f7be597a75-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.080466 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234269 4820 generic.go:334] "Generic (PLEG): container finished" podID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" exitCode=0 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234392 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerDied","Data":"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b1242f9-d2ac-493c-bc89-43f7be597a75","Type":"ContainerDied","Data":"77697e6f65480c0a8c7ecc85d340b2d52d583c5d92b5093accb994850dd6cd98"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.234544 4820 scope.go:117] "RemoveContainer" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.236288 4820 generic.go:334] "Generic (PLEG): container finished" podID="061bac4c-22ff-4144-b114-133ea89494c8" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" exitCode=0 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.236350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerDied","Data":"4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.239281 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerID="8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7" exitCode=0 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.239350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerDied","Data":"8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7"} Feb 21 07:10:06 crc 
kubenswrapper[4820]: I0221 07:10:06.241435 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a5b71e95-fe49-48b2-8d7b-575e17855d52/ovn-northd/0.log" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241467 4820 generic.go:334] "Generic (PLEG): container finished" podID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" exitCode=139 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerDied","Data":"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a5b71e95-fe49-48b2-8d7b-575e17855d52","Type":"ContainerDied","Data":"604dd0f90d347bd1d64b0d2191df0d507c4aabc32e0be6179ae2446497d41fb2"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.241578 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243828 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sfpp9_593c6a26-a16a-4cf6-8aa9-b20bb6d56da7/ovn-controller/0.log" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243876 4820 generic.go:334] "Generic (PLEG): container finished" podID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerID="baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" exitCode=137 Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerDied","Data":"baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sfpp9" event={"ID":"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7","Type":"ContainerDied","Data":"a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.243988 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a93a231ccce463244e328090dedc1dcb1c07884205498f2d63cc04feabadacfe" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.246612 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-665c5b9dff-g2t96" event={"ID":"16ebfdb2-72a8-40c6-b0ed-012f138025b2","Type":"ContainerDied","Data":"64f0896a03976792d3631a63a19b92a0be5d44121ab07ab2ac5e458129f71510"} Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.246713 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-665c5b9dff-g2t96" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.256298 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sfpp9_593c6a26-a16a-4cf6-8aa9-b20bb6d56da7/ovn-controller/0.log" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.256373 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.259283 4820 scope.go:117] "RemoveContainer" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.281595 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.302841 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-665c5b9dff-g2t96"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322324 4820 scope.go:117] "RemoveContainer" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.322662 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f\": container with ID starting with 0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f not found: ID does not exist" containerID="0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322693 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f"} err="failed to get container status \"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f\": rpc error: code = NotFound desc = could not find 
container \"0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f\": container with ID starting with 0885650c2d278e4fcfc14a19206650cb76d15ede9e04950a5042a453cdc5778f not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322711 4820 scope.go:117] "RemoveContainer" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.322948 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012\": container with ID starting with b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012 not found: ID does not exist" containerID="b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322965 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012"} err="failed to get container status \"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012\": rpc error: code = NotFound desc = could not find container \"b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012\": container with ID starting with b1cc11ccdfc6de96c0d5049a89c9618d9d92738f6dd13bbee52e728d1ed06012 not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.322976 4820 scope.go:117] "RemoveContainer" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.337900 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.346930 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.353861 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.375976 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.386431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.386509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.386543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.387810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.387968 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388016 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388047 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388082 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388238 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388288 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") pod \"6c6905da-351a-426d-a36c-0b05dfa993a9\" (UID: \"6c6905da-351a-426d-a36c-0b05dfa993a9\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388320 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388348 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.388381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") pod \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\" (UID: \"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.389338 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run" (OuterVolumeSpecName: "var-run") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: 
"593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.390389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.390468 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391253 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.391924 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.393017 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts" (OuterVolumeSpecName: "scripts") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.393165 4820 scope.go:117] "RemoveContainer" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.395012 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.406581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq" (OuterVolumeSpecName: "kube-api-access-hxmtq") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "kube-api-access-hxmtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.408998 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m" (OuterVolumeSpecName: "kube-api-access-pmn4m") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "kube-api-access-pmn4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.424840 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.429208 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.437598 4820 scope.go:117] "RemoveContainer" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.437973 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d\": container with ID starting with 0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d not found: ID does not exist" containerID="0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438025 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d"} err="failed to get container status \"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d\": rpc error: code = NotFound desc = could not find container \"0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d\": container with ID starting with 0c9b2eff092aa1ebd339d932dc4eab572a874c88efc4c1ba945c1f34356d4d3d not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438060 4820 scope.go:117] "RemoveContainer" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" Feb 21 07:10:06 crc kubenswrapper[4820]: E0221 07:10:06.438418 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30\": container with ID starting with 803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30 not found: ID does not exist" containerID="803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438454 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30"} err="failed to get container status \"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30\": rpc error: code = NotFound desc = could not find container \"803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30\": container with ID starting with 803174f356d79877ee28ec9f386203e9a855a9feda9cbff8c1119398f474ce30 not found: ID does not exist" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.438481 4820 scope.go:117] "RemoveContainer" containerID="200807455a2947c5b934674313e4af887e6f6944441305fbe4c73423e4c5c754" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.454009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.484545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "6c6905da-351a-426d-a36c-0b05dfa993a9" (UID: "6c6905da-351a-426d-a36c-0b05dfa993a9"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491353 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491377 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491388 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-run\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491399 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491413 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmn4m\" (UniqueName: \"kubernetes.io/projected/6c6905da-351a-426d-a36c-0b05dfa993a9-kube-api-access-pmn4m\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491423 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxmtq\" (UniqueName: \"kubernetes.io/projected/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-kube-api-access-hxmtq\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491431 4820 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6905da-351a-426d-a36c-0b05dfa993a9-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491438 4820 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491446 4820 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491463 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491472 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491483 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491492 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c6905da-351a-426d-a36c-0b05dfa993a9-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.491500 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6905da-351a-426d-a36c-0b05dfa993a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.509031 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.523312 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.559982 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" (UID: "593c6a26-a16a-4cf6-8aa9-b20bb6d56da7"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") pod \"061bac4c-22ff-4144-b114-133ea89494c8\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593284 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") pod \"061bac4c-22ff-4144-b114-133ea89494c8\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") pod \"061bac4c-22ff-4144-b114-133ea89494c8\" (UID: \"061bac4c-22ff-4144-b114-133ea89494c8\") " Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593604 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on 
node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.593617 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.606014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf" (OuterVolumeSpecName: "kube-api-access-vr5zf") pod "061bac4c-22ff-4144-b114-133ea89494c8" (UID: "061bac4c-22ff-4144-b114-133ea89494c8"). InnerVolumeSpecName "kube-api-access-vr5zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.619909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data" (OuterVolumeSpecName: "config-data") pod "061bac4c-22ff-4144-b114-133ea89494c8" (UID: "061bac4c-22ff-4144-b114-133ea89494c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.622324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "061bac4c-22ff-4144-b114-133ea89494c8" (UID: "061bac4c-22ff-4144-b114-133ea89494c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.694498 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr5zf\" (UniqueName: \"kubernetes.io/projected/061bac4c-22ff-4144-b114-133ea89494c8-kube-api-access-vr5zf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.694519 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:06 crc kubenswrapper[4820]: I0221 07:10:06.694527 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061bac4c-22ff-4144-b114-133ea89494c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.039887 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.046534 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100714 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100864 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100891 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") pod \"f42ba382-9e03-4f39-904e-87f4d764175c\" (UID: \"f42ba382-9e03-4f39-904e-87f4d764175c\") " Feb 21 07:10:07 crc 
kubenswrapper[4820]: I0221 07:10:07.100958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.100977 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") pod \"0a392f2a-5040-417a-b860-13fa886ea2a2\" (UID: \"0a392f2a-5040-417a-b860-13fa886ea2a2\") " Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.101627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.101839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs" (OuterVolumeSpecName: "logs") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.110971 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.117414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d" (OuterVolumeSpecName: "kube-api-access-29x9d") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "kube-api-access-29x9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.118388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.119325 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts" (OuterVolumeSpecName: "scripts") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.136373 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf" (OuterVolumeSpecName: "kube-api-access-wz8jf") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "kube-api-access-wz8jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.142053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.150306 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.151553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.175997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data" (OuterVolumeSpecName: "config-data") pod "f42ba382-9e03-4f39-904e-87f4d764175c" (UID: "f42ba382-9e03-4f39-904e-87f4d764175c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.179613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data" (OuterVolumeSpecName: "config-data") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.180122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a392f2a-5040-417a-b860-13fa886ea2a2" (UID: "0a392f2a-5040-417a-b860-13fa886ea2a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213052 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213095 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213105 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz8jf\" (UniqueName: \"kubernetes.io/projected/0a392f2a-5040-417a-b860-13fa886ea2a2-kube-api-access-wz8jf\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213115 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 
07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213124 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29x9d\" (UniqueName: \"kubernetes.io/projected/f42ba382-9e03-4f39-904e-87f4d764175c-kube-api-access-29x9d\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213133 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213141 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213149 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213157 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f42ba382-9e03-4f39-904e-87f4d764175c-logs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213166 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213174 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f42ba382-9e03-4f39-904e-87f4d764175c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213181 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/0a392f2a-5040-417a-b860-13fa886ea2a2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.213189 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a392f2a-5040-417a-b860-13fa886ea2a2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265456 4820 generic.go:334] "Generic (PLEG): container finished" podID="f42ba382-9e03-4f39-904e-87f4d764175c" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" exitCode=0 Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265505 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerDied","Data":"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-867cbf55-jx754" event={"ID":"f42ba382-9e03-4f39-904e-87f4d764175c","Type":"ContainerDied","Data":"232aac902ab163c61332ca9251f3b8bd22a0d25dd116a7153f1bb796d475d539"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265544 4820 scope.go:117] "RemoveContainer" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.265630 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-867cbf55-jx754" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.274464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"061bac4c-22ff-4144-b114-133ea89494c8","Type":"ContainerDied","Data":"167b5165b4391c8783b551aad0df3cc918db35e3f8cb50ff81e948ca2a961b4f"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.274553 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.286885 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c6905da-351a-426d-a36c-0b05dfa993a9","Type":"ContainerDied","Data":"506d7091e1481dd403657fac413ff300e649bdb874981551b296a055c67d3957"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.287022 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.304486 4820 scope.go:117] "RemoveContainer" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.329485 4820 generic.go:334] "Generic (PLEG): container finished" podID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" exitCode=0 Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.329589 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sfpp9" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.330338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.330379 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a392f2a-5040-417a-b860-13fa886ea2a2","Type":"ContainerDied","Data":"0c38be7124a920b640712dd690755259fce0c90bcf50290cc80460e97c079adc"} Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.330434 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.336303 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.366660 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-867cbf55-jx754"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.385455 4820 scope.go:117] "RemoveContainer" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.398007 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5\": container with ID starting with 53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5 not found: ID does not exist" containerID="53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.398045 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5"} err="failed to get container status \"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5\": rpc error: code = NotFound desc = could not find container \"53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5\": container with ID starting with 53b06fc8e1dc8470e3afddfe707f843b1f84eeed6dd3461fce33d22013abffe5 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.398068 4820 scope.go:117] "RemoveContainer" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.400536 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d\": container with ID starting with cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d not found: ID does not exist" containerID="cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.400572 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d"} err="failed to get container status \"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d\": rpc error: code = NotFound desc = could not find container \"cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d\": container with ID starting with cd0d23483e7644cd5a684a29e8131de034c5ec9b2fe4063086dd181b72db9a5d not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.400597 4820 scope.go:117] "RemoveContainer" containerID="4808aa357770c6a9cf05a4f3ee270862ee0dcce52d4403a1e8d52bcbdd8d7b5c" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.407301 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.419832 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.433169 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.441036 4820 scope.go:117] "RemoveContainer" containerID="8ea9d572727a93891412c9eefb51f0b89a90a953470d2aea7e3c780c0bab4fc7" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.441164 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.469895 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.479452 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.480402 4820 scope.go:117] "RemoveContainer" containerID="5198d061e257c6bdda5bc9f71cfa5143331f9afe3dc440aebe7e8c90c90675cf" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.494306 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.504686 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sfpp9"] Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.517675 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.106:11211: i/o timeout" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.518931 4820 scope.go:117] "RemoveContainer" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 
07:10:07.541958 4820 scope.go:117] "RemoveContainer" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.561412 4820 scope.go:117] "RemoveContainer" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.602557 4820 scope.go:117] "RemoveContainer" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.626664 4820 scope.go:117] "RemoveContainer" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.627064 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d\": container with ID starting with eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d not found: ID does not exist" containerID="eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627092 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d"} err="failed to get container status \"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d\": rpc error: code = NotFound desc = could not find container \"eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d\": container with ID starting with eaec915ec30552334deae6689f286df168816ed373b7cb5dd3b0538630f13f7d not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627113 4820 scope.go:117] "RemoveContainer" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.627456 4820 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208\": container with ID starting with c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208 not found: ID does not exist" containerID="c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627503 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208"} err="failed to get container status \"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208\": rpc error: code = NotFound desc = could not find container \"c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208\": container with ID starting with c00c98b113a1d988816cb972617b0e3c0c1c252eaf3e4695bfed0be2edbd6208 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627536 4820 scope.go:117] "RemoveContainer" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.627874 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3\": container with ID starting with e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3 not found: ID does not exist" containerID="e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627902 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3"} err="failed to get container status \"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3\": rpc error: code = NotFound desc = could 
not find container \"e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3\": container with ID starting with e9e3c9b950c7ed7ce3924ae716896c99f8a4b6fd6a20d48e296dd545cf0f24d3 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.627917 4820 scope.go:117] "RemoveContainer" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.628334 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6\": container with ID starting with 28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6 not found: ID does not exist" containerID="28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.628355 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6"} err="failed to get container status \"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6\": rpc error: code = NotFound desc = could not find container \"28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6\": container with ID starting with 28a0cf01ecd0c5145e3e6d02dd318ff3215eca864659e1ccf69334999aa1fbb6 not found: ID does not exist" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.709374 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061bac4c-22ff-4144-b114-133ea89494c8" path="/var/lib/kubelet/pods/061bac4c-22ff-4144-b114-133ea89494c8/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.709894 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" path="/var/lib/kubelet/pods/0a392f2a-5040-417a-b860-13fa886ea2a2/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 
07:10:07.710549 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" path="/var/lib/kubelet/pods/16ebfdb2-72a8-40c6-b0ed-012f138025b2/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.711628 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" path="/var/lib/kubelet/pods/593c6a26-a16a-4cf6-8aa9-b20bb6d56da7/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.712198 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" path="/var/lib/kubelet/pods/6c6905da-351a-426d-a36c-0b05dfa993a9/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.713369 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" path="/var/lib/kubelet/pods/8b1242f9-d2ac-493c-bc89-43f7be597a75/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.713996 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" path="/var/lib/kubelet/pods/a5b71e95-fe49-48b2-8d7b-575e17855d52/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: I0221 07:10:07.714666 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" path="/var/lib/kubelet/pods/f42ba382-9e03-4f39-904e-87f4d764175c/volumes" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.944341 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.944849 4820 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.945285 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.945318 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.945975 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.947311 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.948363 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:07 crc kubenswrapper[4820]: E0221 07:10:07.948395 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:11 crc kubenswrapper[4820]: E0221 07:10:11.893808 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:10:11 crc kubenswrapper[4820]: E0221 07:10:11.893896 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:27.893879365 +0000 UTC m=+1402.926963563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.946378 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.947366 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.948771 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.949378 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" 
containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.949495 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.949950 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.952568 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:12 crc kubenswrapper[4820]: E0221 07:10:12.952701 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.944077 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.944726 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.945100 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.945132 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.945772 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.947098 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.948201 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:17 crc kubenswrapper[4820]: E0221 07:10:17.948230 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462008 4820 generic.go:334] "Generic (PLEG): container finished" podID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerID="cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117" exitCode=0 Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462097 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerDied","Data":"cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117"} Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7796b97765-sqvtc" 
event={"ID":"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d","Type":"ContainerDied","Data":"11c093e11abcb295098b0a4ebd02622476fcadbf35b1cbecc53f2deb5b20c639"} Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.462645 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c093e11abcb295098b0a4ebd02622476fcadbf35b1cbecc53f2deb5b20c639" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.500840 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.549982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550058 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550096 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc 
kubenswrapper[4820]: I0221 07:10:21.550119 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.550173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.572144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.575497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc" (OuterVolumeSpecName: "kube-api-access-ktrzc") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "kube-api-access-ktrzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.621335 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.623757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.628644 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.634065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config" (OuterVolumeSpecName: "config") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.651397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.651890 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") pod \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\" (UID: \"62a9c95e-34e5-49f3-aea4-bbf7f1ae332d\") " Feb 21 07:10:21 crc kubenswrapper[4820]: W0221 07:10:21.652040 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d/volumes/kubernetes.io~secret/ovndb-tls-certs Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652070 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" (UID: "62a9c95e-34e5-49f3-aea4-bbf7f1ae332d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652444 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652456 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652465 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652473 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652481 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652490 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktrzc\" (UniqueName: \"kubernetes.io/projected/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-kube-api-access-ktrzc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:21 crc kubenswrapper[4820]: I0221 07:10:21.652501 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:22 crc kubenswrapper[4820]: I0221 07:10:22.470871 4820 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7796b97765-sqvtc" Feb 21 07:10:22 crc kubenswrapper[4820]: I0221 07:10:22.491348 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:10:22 crc kubenswrapper[4820]: I0221 07:10:22.498099 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7796b97765-sqvtc"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.944107 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.944814 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.945423 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.945490 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.945450 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.947353 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.949301 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:22 crc kubenswrapper[4820]: E0221 07:10:22.949343 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:23 crc kubenswrapper[4820]: I0221 07:10:23.706091 4820 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" path="/var/lib/kubelet/pods/62a9c95e-34e5-49f3-aea4-bbf7f1ae332d/volumes" Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.941176 4820 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.941587 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data podName:5916b629-5e69-4ad3-9180-c07181d3ff37 nodeName:}" failed. No retries permitted until 2026-02-21 07:10:59.941568918 +0000 UTC m=+1434.974653106 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data") pod "barbican-keystone-listener-7b6747758b-gs56z" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37") : secret "barbican-config-data" not found Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944085 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944426 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944722 4820 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.944748 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.945124 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.946302 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.947568 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 21 07:10:27 crc kubenswrapper[4820]: E0221 07:10:27.947607 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-rwsk7" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.527493 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.528051 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rwsk7_7880da24-89a6-4428-b9c1-5ffe6647af01/ovs-vswitchd/0.log" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.528954 4820 generic.go:334] "Generic (PLEG): container finished" podID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6" exitCode=137 Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.529025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6"} Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537218 4820 generic.go:334] "Generic (PLEG): container finished" podID="b2200daa-1861-49f4-965a-68417ec65542" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" exitCode=137 Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537271 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"} Feb 21 07:10:28 crc kubenswrapper[4820]: 
I0221 07:10:28.537350 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2200daa-1861-49f4-965a-68417ec65542","Type":"ContainerDied","Data":"0365054e0e1b957929429be30908085261342e98138a116476a25078e33fdc0f"} Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.537422 4820 scope.go:117] "RemoveContainer" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.562890 4820 scope.go:117] "RemoveContainer" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.591571 4820 scope.go:117] "RemoveContainer" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.628523 4820 scope.go:117] "RemoveContainer" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.649994 4820 scope.go:117] "RemoveContainer" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.651918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.651960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " 
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652088 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b2200daa-1861-49f4-965a-68417ec65542\" (UID: \"b2200daa-1861-49f4-965a-68417ec65542\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652683 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache" (OuterVolumeSpecName: "cache") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.652763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock" (OuterVolumeSpecName: "lock") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.653564 4820 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-cache\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.653587 4820 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2200daa-1861-49f4-965a-68417ec65542-lock\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.660882 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc" (OuterVolumeSpecName: "kube-api-access-2pmsc") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "kube-api-access-2pmsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.660937 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.668635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.669581 4820 scope.go:117] "RemoveContainer" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.695876 4820 scope.go:117] "RemoveContainer" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.720164 4820 scope.go:117] "RemoveContainer" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.739939 4820 scope.go:117] "RemoveContainer" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.754552 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.754591 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.754601 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmsc\" (UniqueName: \"kubernetes.io/projected/b2200daa-1861-49f4-965a-68417ec65542-kube-api-access-2pmsc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc 
kubenswrapper[4820]: I0221 07:10:28.758182 4820 scope.go:117] "RemoveContainer" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.767053 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.780269 4820 scope.go:117] "RemoveContainer" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.797312 4820 scope.go:117] "RemoveContainer" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.810864 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rwsk7_7880da24-89a6-4428-b9c1-5ffe6647af01/ovs-vswitchd/0.log" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.811827 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.812884 4820 scope.go:117] "RemoveContainer" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.832555 4820 scope.go:117] "RemoveContainer" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.853492 4820 scope.go:117] "RemoveContainer" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.854926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.854957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855005 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc 
kubenswrapper[4820]: I0221 07:10:28.855072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") pod \"7880da24-89a6-4428-b9c1-5ffe6647af01\" (UID: \"7880da24-89a6-4428-b9c1-5ffe6647af01\") " Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.855909 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts" (OuterVolumeSpecName: "scripts") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857367 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log" (OuterVolumeSpecName: "var-log") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run" (OuterVolumeSpecName: "var-run") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.857717 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib" (OuterVolumeSpecName: "var-lib") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.860656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j" (OuterVolumeSpecName: "kube-api-access-hg85j") pod "7880da24-89a6-4428-b9c1-5ffe6647af01" (UID: "7880da24-89a6-4428-b9c1-5ffe6647af01"). InnerVolumeSpecName "kube-api-access-hg85j". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.884447 4820 scope.go:117] "RemoveContainer" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.884876 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e\": container with ID starting with 4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e not found: ID does not exist" containerID="4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.884905 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e"} err="failed to get container status \"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e\": rpc error: code = NotFound desc = could not find container \"4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e\": container with ID starting with 4cd64ab5df587e435cc2c060fb12523f479d4494b06c20099d050bf48527d38e not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.884924 4820 scope.go:117] "RemoveContainer" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.885341 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf\": container with ID starting with 697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf not found: ID does not exist" containerID="697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.885405 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf"} err="failed to get container status \"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf\": rpc error: code = NotFound desc = could not find container \"697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf\": container with ID starting with 697d28d2db4ce4f5b6a133475614969de39e3e2f7fbf85cb3b1ee5c29a3aa5bf not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.885439 4820 scope.go:117] "RemoveContainer" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.886447 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895\": container with ID starting with adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895 not found: ID does not exist" containerID="adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.886487 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895"} err="failed to get container status \"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895\": rpc error: code = NotFound desc = could not find container \"adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895\": container with ID starting with adea71b78b2d6660c83cc894fdedc64cc9ff5bd6f4b08ec75a579265062c1895 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.886530 4820 scope.go:117] "RemoveContainer" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.887439 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91\": container with ID starting with b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91 not found: ID does not exist" containerID="b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887476 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91"} err="failed to get container status \"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91\": rpc error: code = NotFound desc = could not find container \"b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91\": container with ID starting with b8becabb6bf27e5daba7a63372c5719c7705c100e59cd0c5962fced806949c91 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887497 4820 scope.go:117] "RemoveContainer" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.887825 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02\": container with ID starting with 15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02 not found: ID does not exist" containerID="15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887852 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02"} err="failed to get container status \"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02\": rpc error: code = NotFound desc = could not find container \"15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02\": container with ID starting with 15f275a783a52802f701780bc9c21206da8b0a34c9b6bd2e5be2020400f55c02 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.887870 4820 scope.go:117] "RemoveContainer" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.888133 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d\": container with ID starting with 143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d not found: ID does not exist" containerID="143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888163 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d"} err="failed to get container status \"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d\": rpc error: code = NotFound desc = could not find container \"143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d\": container with ID starting with 143e92fc41411f88a018ff5be28d554da07e7cfba0208f13c06b05f70d22e05d not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888181 4820 scope.go:117] "RemoveContainer" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.888536 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f\": container with ID starting with 1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f not found: ID does not exist" containerID="1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888574 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f"} err="failed to get container status \"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f\": rpc error: code = NotFound desc = could not find container \"1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f\": container with ID starting with 1fb33485e2fd621065c6e9f298aa04a8950fe6c56b87116410e1e300293cb07f not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888593 4820 scope.go:117] "RemoveContainer" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.888823 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612\": container with ID starting with 5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612 not found: ID does not exist" containerID="5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888846 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612"} err="failed to get container status \"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612\": rpc error: code = NotFound desc = could not find container \"5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612\": container with ID starting with 5ef7be739ed8d433753492b2ec077a4b655ed4f37af6ec7706d3b6a03d406612 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.888859 4820 scope.go:117] "RemoveContainer" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889064 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713\": container with ID starting with c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713 not found: ID does not exist" containerID="c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889085 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713"} err="failed to get container status \"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713\": rpc error: code = NotFound desc = could not find container \"c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713\": container with ID starting with c32a136fa3198a6dcbe8631f054bbcccdd638b4a441ba7dd210df4457211c713 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889098 4820 scope.go:117] "RemoveContainer" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889325 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1\": container with ID starting with 956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1 not found: ID does not exist" containerID="956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889343 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1"} err="failed to get container status \"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1\": rpc error: code = NotFound desc = could not find container \"956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1\": container with ID starting with 956e24fa1d4da1268c140a87611c19f8d8069101731fa562f1f7d729352deeb1 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889355 4820 scope.go:117] "RemoveContainer" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889569 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7\": container with ID starting with 472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7 not found: ID does not exist" containerID="472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889586 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7"} err="failed to get container status \"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7\": rpc error: code = NotFound desc = could not find container \"472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7\": container with ID starting with 472ba1f1209cf41cc406c274306671c1bd014c6f41903b60908674f9098bc4d7 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889598 4820 scope.go:117] "RemoveContainer" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.889774 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3\": container with ID starting with 3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3 not found: ID does not exist" containerID="3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889789 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3"} err="failed to get container status \"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3\": rpc error: code = NotFound desc = could not find container \"3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3\": container with ID starting with 3eabaf9569ff5b1fa3346a74deecd82375ae204714afbd47bd289a2d91b4cce3 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.889800 4820 scope.go:117] "RemoveContainer" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.890002 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53\": container with ID starting with 6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53 not found: ID does not exist" containerID="6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890020 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53"} err="failed to get container status \"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53\": rpc error: code = NotFound desc = could not find container \"6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53\": container with ID starting with 6cbae22d7bd1671cbf5ef7bc09b4c089cd4d745c8fe0a004406917bebc713a53 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890031 4820 scope.go:117] "RemoveContainer" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.890258 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2\": container with ID starting with 8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2 not found: ID does not exist" containerID="8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890276 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2"} err="failed to get container status \"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2\": rpc error: code = NotFound desc = could not find container \"8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2\": container with ID starting with 8b53bdc4f5faef12506655de3f67586dd8c8df6a1fef24b6ea9e80a9f78a66e2 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890292 4820 scope.go:117] "RemoveContainer" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"
Feb 21 07:10:28 crc kubenswrapper[4820]: E0221 07:10:28.890489 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80\": container with ID starting with ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80 not found: ID does not exist" containerID="ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.890507 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80"} err="failed to get container status \"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80\": rpc error: code = NotFound desc = could not find container \"ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80\": container with ID starting with ceb67d6660c010472bc3c94f91cca601a389e557fdb9e6d6ada247c8d664dc80 not found: ID does not exist"
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.933574 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2200daa-1861-49f4-965a-68417ec65542" (UID: "b2200daa-1861-49f4-965a-68417ec65542"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.956990 4820 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-etc-ovs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957020 4820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-log\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957030 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2200daa-1861-49f4-965a-68417ec65542-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957040 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-run\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957048 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7880da24-89a6-4428-b9c1-5ffe6647af01-var-lib\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957056 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg85j\" (UniqueName: \"kubernetes.io/projected/7880da24-89a6-4428-b9c1-5ffe6647af01-kube-api-access-hg85j\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:28 crc kubenswrapper[4820]: I0221 07:10:28.957066 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7880da24-89a6-4428-b9c1-5ffe6647af01-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.221921 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.227116 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.552149 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rwsk7_7880da24-89a6-4428-b9c1-5ffe6647af01/ovs-vswitchd/0.log"
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.553201 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rwsk7"
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.553211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rwsk7" event={"ID":"7880da24-89a6-4428-b9c1-5ffe6647af01","Type":"ContainerDied","Data":"675fc4f5e2aff6c590607c714945d1b90c7e7d3a04e9fbfd0194ea4b92050e93"}
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.553305 4820 scope.go:117] "RemoveContainer" containerID="d879c32dde2136b0ae821d52125217d2f7fc984c04d9a656f69427b318d7b6b6"
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.585347 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"]
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.588464 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-rwsk7"]
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.706443 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" path="/var/lib/kubelet/pods/7880da24-89a6-4428-b9c1-5ffe6647af01/volumes"
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.707483 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2200daa-1861-49f4-965a-68417ec65542" path="/var/lib/kubelet/pods/b2200daa-1861-49f4-965a-68417ec65542/volumes"
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.771379 4820 scope.go:117] "RemoveContainer" containerID="355a966c61785d4a47fb8b98c45d3cc6b32b3e30b6504669b7d95030cf31ce73"
Feb 21 07:10:29 crc kubenswrapper[4820]: I0221 07:10:29.788228 4820 scope.go:117] "RemoveContainer" containerID="f0e8cd813e640fb93541738f45335efda88900c442e4f6521a72b6bc4a25130d"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.361993 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") "
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414928 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") "
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") "
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.414989 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") "
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.415050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") pod \"61de836b-112e-4002-80c7-5ab77d4b9069\" (UID: \"61de836b-112e-4002-80c7-5ab77d4b9069\") "
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.415167 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs" (OuterVolumeSpecName: "logs") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.415443 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61de836b-112e-4002-80c7-5ab77d4b9069-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.419792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.419990 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl" (OuterVolumeSpecName: "kube-api-access-h77jl") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "kube-api-access-h77jl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.442458 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.450803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data" (OuterVolumeSpecName: "config-data") pod "61de836b-112e-4002-80c7-5ab77d4b9069" (UID: "61de836b-112e-4002-80c7-5ab77d4b9069"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516926 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516971 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516984 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h77jl\" (UniqueName: \"kubernetes.io/projected/61de836b-112e-4002-80c7-5ab77d4b9069-kube-api-access-h77jl\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.516998 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61de836b-112e-4002-80c7-5ab77d4b9069-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583789 4820 generic.go:334] "Generic (PLEG): container finished" podID="61de836b-112e-4002-80c7-5ab77d4b9069" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d" exitCode=137
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583838 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerDied","Data":"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"}
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67dd4454fc-lr4lq" event={"ID":"61de836b-112e-4002-80c7-5ab77d4b9069","Type":"ContainerDied","Data":"d5ed25326b5133c99c08fd6d1fe6d320a4913920be2b2b8d47571a1f05ab484f"}
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583872 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67dd4454fc-lr4lq"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.583891 4820 scope.go:117] "RemoveContainer" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.613159 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"]
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.613671 4820 scope.go:117] "RemoveContainer" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.619175 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-67dd4454fc-lr4lq"]
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.630488 4820 scope.go:117] "RemoveContainer" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"
Feb 21 07:10:32 crc kubenswrapper[4820]: E0221 07:10:32.630916 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d\": container with ID starting with 0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d not found: ID does not exist" containerID="0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.630955 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d"} err="failed to get container status \"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d\": rpc error: code = NotFound desc = could not find container \"0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d\": container with ID starting with 0bc1cb874c4b4ae59e91487117b7037236dc22769c454325e2a6e55aed593c4d not found: ID does not exist"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.630986 4820 scope.go:117] "RemoveContainer" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"
Feb 21 07:10:32 crc kubenswrapper[4820]: E0221 07:10:32.631204 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35\": container with ID starting with 50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35 not found: ID does not exist" containerID="50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"
Feb 21 07:10:32 crc kubenswrapper[4820]: I0221 07:10:32.631229 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35"} err="failed to get container status \"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35\": rpc error: code = NotFound desc = could not find container \"50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35\": container with ID starting with 50f258a2395d305c3f3023bbddc1f867ea4278a5fa2ab04e94611fecb3bcee35 not found: ID does not exist"
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.374663 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.428965 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") "
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") "
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429121 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") "
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") "
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") pod \"5916b629-5e69-4ad3-9180-c07181d3ff37\" (UID: \"5916b629-5e69-4ad3-9180-c07181d3ff37\") "
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429385 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs" (OuterVolumeSpecName: "logs") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.429694 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5916b629-5e69-4ad3-9180-c07181d3ff37-logs\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.432437 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.432961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6" (OuterVolumeSpecName: "kube-api-access-6r9h6") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "kube-api-access-6r9h6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.446472 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.466148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data" (OuterVolumeSpecName: "config-data") pod "5916b629-5e69-4ad3-9180-c07181d3ff37" (UID: "5916b629-5e69-4ad3-9180-c07181d3ff37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531359 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9h6\" (UniqueName: \"kubernetes.io/projected/5916b629-5e69-4ad3-9180-c07181d3ff37-kube-api-access-6r9h6\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531392 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531401 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.531409 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5916b629-5e69-4ad3-9180-c07181d3ff37-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596652 4820 generic.go:334] "Generic (PLEG): container finished" podID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" exitCode=137
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerDied","Data":"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"}
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z" event={"ID":"5916b629-5e69-4ad3-9180-c07181d3ff37","Type":"ContainerDied","Data":"86fa03fcf82765a136a3aab82794955988ac327e55c1a34182d75c4632f7c8fc"}
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596749 4820 scope.go:117] "RemoveContainer" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.596867 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b6747758b-gs56z"
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.622574 4820 scope.go:117] "RemoveContainer" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.624919 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"]
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.633936 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7b6747758b-gs56z"]
Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.641086 4820 scope.go:117] "RemoveContainer" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"
Feb 21 07:10:33 crc kubenswrapper[4820]: E0221 07:10:33.641461 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e\": container with ID starting with 280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e not
found: ID does not exist" containerID="280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.641497 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e"} err="failed to get container status \"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e\": rpc error: code = NotFound desc = could not find container \"280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e\": container with ID starting with 280677e63215fe94f6e827818112136350de0e5291daa8abf46a3d0ba9a38b1e not found: ID does not exist" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.641521 4820 scope.go:117] "RemoveContainer" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" Feb 21 07:10:33 crc kubenswrapper[4820]: E0221 07:10:33.642079 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1\": container with ID starting with 5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1 not found: ID does not exist" containerID="5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.642103 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1"} err="failed to get container status \"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1\": rpc error: code = NotFound desc = could not find container \"5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1\": container with ID starting with 5297f581f43a4124cef637702bfb29e19268041774ce752f2ad61c5949f4bff1 not found: ID does not exist" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.703779 
4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" path="/var/lib/kubelet/pods/5916b629-5e69-4ad3-9180-c07181d3ff37/volumes" Feb 21 07:10:33 crc kubenswrapper[4820]: I0221 07:10:33.704378 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" path="/var/lib/kubelet/pods/61de836b-112e-4002-80c7-5ab77d4b9069/volumes" Feb 21 07:10:43 crc kubenswrapper[4820]: I0221 07:10:43.816779 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:10:43 crc kubenswrapper[4820]: I0221 07:10:43.818372 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:11:13 crc kubenswrapper[4820]: I0221 07:11:13.816927 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:11:13 crc kubenswrapper[4820]: I0221 07:11:13.817879 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.415384 4820 
scope.go:117] "RemoveContainer" containerID="b05b0ffeced626b46e5a3d7acf041143c5dda7c4d6e96829cd77f955d68928e3" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.435507 4820 scope.go:117] "RemoveContainer" containerID="25ee57b0b664af1977c29401acb29880d1b373991571fe5848274a63a6cd3a3e" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.464917 4820 scope.go:117] "RemoveContainer" containerID="77ef8fafad5e6b7303c2ab29a54ec70cbb2ea080725bfabd09344c5407b83c16" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.484665 4820 scope.go:117] "RemoveContainer" containerID="3d73b26b5221cdf8b2f3495526d1e7baef6e58d18c45f1b76e76efd304e84f0f" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.507839 4820 scope.go:117] "RemoveContainer" containerID="b1e2e56563934ebad235ed2f0f20504c79930fcb47caf9e4bfbd0d1d3a55fe60" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.533511 4820 scope.go:117] "RemoveContainer" containerID="51679703ae2158b53bc0911e57a3e4d6e461f24e956bb1ea7408f2cb69b87ef1" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.551999 4820 scope.go:117] "RemoveContainer" containerID="ebca1bc305e6cb051db04835594d022509a4dd1726bfbffcfc0b2262d64b6ee2" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.566507 4820 scope.go:117] "RemoveContainer" containerID="3687cb41be17e324f8d8ae7287b8149bf97802e24e08623475454682c9f421e8" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.595398 4820 scope.go:117] "RemoveContainer" containerID="ff0159151c6f141c22cffbaa81dad0f0b8a12039ef73dc3cf246a84b8885a789" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.609538 4820 scope.go:117] "RemoveContainer" containerID="baa7cece2ce256578638bf4f6a5bc9638afee7fd94bd34c74a485d35c9ac1293" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.624709 4820 scope.go:117] "RemoveContainer" containerID="bc7f6f9a5d58d38241bb23918ec3d5506b30cfd767c5cd57651093052cf537b1" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.638267 4820 scope.go:117] 
"RemoveContainer" containerID="bccd056d3ccb7b521fe7131d2adc1ebf924abaee6a5315ab7005a0ebaf022fd8" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.652492 4820 scope.go:117] "RemoveContainer" containerID="6217a40428e0542093ddeccb7b2d5a7d3a0d949e486fb5723c5776887db5cdde" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.681087 4820 scope.go:117] "RemoveContainer" containerID="2bf9bc350dca95c1ab5b9b95e84478c10894bc91f944ad95cd208ed56c827df0" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.695198 4820 scope.go:117] "RemoveContainer" containerID="03c548c811acb4c242acaed906047e9cc39adbaca7c520712de29f84928072c8" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.710799 4820 scope.go:117] "RemoveContainer" containerID="a01c8152614e99c3561bbc5b953c4aa156aeb30d7be0dbf08d11fcbf1dfa7fff" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.727422 4820 scope.go:117] "RemoveContainer" containerID="4dd5abb92c8dda3b5eae940d15310c89c1fabe5b33b14d2a4979ab885abf315a" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.747525 4820 scope.go:117] "RemoveContainer" containerID="fefa9ef65a27a95fd0fbfd9f605222ae2b400c17ddf7734534b5e86974696a63" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.767217 4820 scope.go:117] "RemoveContainer" containerID="6e780104fae380320d0ded6249999a3a1b8e347ec62150e353a945acffed1e2c" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.784433 4820 scope.go:117] "RemoveContainer" containerID="9d429a4b3a6200dfae121b729b1359e79321fa7e7717f43e19aff11a7955b313" Feb 21 07:11:14 crc kubenswrapper[4820]: I0221 07:11:14.801784 4820 scope.go:117] "RemoveContainer" containerID="4d5fc8e1fa59379f7fa36b4bb94241f9192d59f0637e2f4694cd6d2809542488" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.403999 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404700 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404711 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404723 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404730 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404737 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404743 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404751 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404757 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404767 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404773 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404784 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404789 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404802 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404807 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404821 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404829 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404835 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404844 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404850 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404859 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404864 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404872 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404878 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404885 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404891 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404900 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404905 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404912 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404918 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404929 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" 
containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404934 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404943 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404949 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404959 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404964 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="setup-container" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404974 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404979 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404988 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.404993 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.404999 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405005 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405011 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405016 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405024 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405031 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405040 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405046 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405056 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="mysql-bootstrap" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405061 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="mysql-bootstrap" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405071 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405076 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405084 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405090 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405099 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405106 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405114 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405120 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405127 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405133 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405139 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405155 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405161 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405167 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405173 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405181 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405187 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405195 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405200 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" Feb 21 07:11:38 crc 
kubenswrapper[4820]: E0221 07:11:38.405208 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405213 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405220 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405247 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405254 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405261 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405266 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405273 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405279 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" 
containerName="placement-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405286 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405291 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405300 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405306 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405314 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405320 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405327 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405333 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405340 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="sg-core" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405345 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" 
containerName="sg-core" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405357 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405364 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server-init" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405370 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server-init" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405377 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405382 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405389 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405395 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405402 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405408 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" 
containerName="container-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405414 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405419 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405427 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405432 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405441 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405446 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405454 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405460 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405474 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" 
containerName="account-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405483 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405490 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405498 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405503 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405513 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405519 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405549 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405556 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405563 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405569 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" Feb 21 
07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405575 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405581 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405588 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405594 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" Feb 21 07:11:38 crc kubenswrapper[4820]: E0221 07:11:38.405603 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405609 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405728 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405741 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a9c95e-34e5-49f3-aea4-bbf7f1ae332d" containerName="neutron-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405750 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405758 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-auditor" Feb 21 07:11:38 
crc kubenswrapper[4820]: I0221 07:11:38.405764 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405769 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405780 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-expirer" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405789 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405798 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405806 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa49984a-9511-4449-adc6-997899961f73" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405814 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405820 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405830 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405836 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="swift-recon-cron" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405845 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6905da-351a-426d-a36c-0b05dfa993a9" containerName="galera" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405853 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="061bac4c-22ff-4144-b114-133ea89494c8" containerName="nova-cell1-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405861 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-central-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405867 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405874 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca75969-e299-435a-a607-d470d4ab831e" containerName="nova-scheduler-scheduler" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405882 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405888 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61de836b-112e-4002-80c7-5ab77d4b9069" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405895 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1242f9-d2ac-493c-bc89-43f7be597a75" containerName="rabbitmq" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405905 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-updater" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405914 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405922 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c841249-7293-4826-b05f-e4a189aaef07" containerName="nova-cell0-conductor-conductor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405930 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405938 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="593c6a26-a16a-4cf6-8aa9-b20bb6d56da7" containerName="ovn-controller" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405948 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405955 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5916b629-5e69-4ad3-9180-c07181d3ff37" containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405961 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405967 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="ovn-northd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405974 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bb0a5-0caa-4137-b448-a2b55d9be1ff" containerName="glance-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405979 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc8f44-c54c-42c0-8430-742c6bb61165" 
containerName="barbican-keystone-listener" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405988 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="object-replicator" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.405995 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovs-vswitchd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406003 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-reaper" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406012 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a112132d-4a29-460c-985d-b0ca2ddb1aa6" containerName="nova-metadata-metadata" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406019 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb570ff-2a5e-4913-a84f-346579eaa104" containerName="kube-state-metrics" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406024 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="ceilometer-notification-agent" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406031 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7880da24-89a6-4428-b9c1-5ffe6647af01" containerName="ovsdb-server" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406038 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b71e95-fe49-48b2-8d7b-575e17855d52" containerName="openstack-network-exporter" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406047 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406053 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="account-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406061 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4709782f-54e7-4a78-a56e-8f58a5556501" containerName="barbican-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406068 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406076 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ebfdb2-72a8-40c6-b0ed-012f138025b2" containerName="keystone-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406085 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406091 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f99a57a-608b-4678-9be5-abc4347c8bcb" containerName="memcached" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406097 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="rsync" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406104 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e0258-f8e3-4e7c-8a4d-aec3ee4d2ffe" containerName="placement-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406114 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2200daa-1861-49f4-965a-68417ec65542" containerName="container-auditor" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406121 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406128 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0e16d52c-9322-49cf-9948-8d1c56c0a5ed" containerName="nova-api-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406138 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="sg-core" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406147 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a392f2a-5040-417a-b860-13fa886ea2a2" containerName="proxy-httpd" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406154 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bd84b-c67f-4a89-9f92-a68094530566" containerName="cinder-api" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406160 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42ba382-9e03-4f39-904e-87f4d764175c" containerName="barbican-worker-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406168 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3827c2-ee55-4f86-a752-d7cbc9c6454e" containerName="glance-log" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.406174 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b1b012-98c9-49cf-852d-a2ff95b746cf" containerName="mariadb-account-create-update" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.407087 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.418775 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.553112 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.553256 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.553310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.654406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.654466 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.654529 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.655051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.655274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.673309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"redhat-operators-g68lp\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:38 crc kubenswrapper[4820]: I0221 07:11:38.726247 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:39 crc kubenswrapper[4820]: I0221 07:11:39.176418 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:40 crc kubenswrapper[4820]: I0221 07:11:40.161974 4820 generic.go:334] "Generic (PLEG): container finished" podID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" exitCode=0 Feb 21 07:11:40 crc kubenswrapper[4820]: I0221 07:11:40.162071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd"} Feb 21 07:11:40 crc kubenswrapper[4820]: I0221 07:11:40.170853 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerStarted","Data":"3cca4ef5882796f022029d444d82100b128dbefcd368d40ca8b6e76495cb4966"} Feb 21 07:11:41 crc kubenswrapper[4820]: I0221 07:11:41.181658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerStarted","Data":"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429"} Feb 21 07:11:42 crc kubenswrapper[4820]: I0221 07:11:42.194231 4820 generic.go:334] "Generic (PLEG): container finished" podID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" exitCode=0 Feb 21 07:11:42 crc kubenswrapper[4820]: I0221 07:11:42.194333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" 
event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429"} Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.202650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerStarted","Data":"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98"} Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.223730 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g68lp" podStartSLOduration=2.82425382 podStartE2EDuration="5.223709911s" podCreationTimestamp="2026-02-21 07:11:38 +0000 UTC" firstStartedPulling="2026-02-21 07:11:40.164549423 +0000 UTC m=+1475.197633611" lastFinishedPulling="2026-02-21 07:11:42.564005464 +0000 UTC m=+1477.597089702" observedRunningTime="2026-02-21 07:11:43.219940018 +0000 UTC m=+1478.253024236" watchObservedRunningTime="2026-02-21 07:11:43.223709911 +0000 UTC m=+1478.256794109" Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.816485 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.816791 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:11:43 crc kubenswrapper[4820]: I0221 07:11:43.816833 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:11:44 crc kubenswrapper[4820]: I0221 07:11:44.209002 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:11:44 crc kubenswrapper[4820]: I0221 07:11:44.209079 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0" gracePeriod=600 Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.218360 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0" exitCode=0 Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.218420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0"} Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.219841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"} Feb 21 07:11:45 crc kubenswrapper[4820]: I0221 07:11:45.219868 4820 scope.go:117] "RemoveContainer" 
containerID="c99eabcd7cdc00f7af4fa074914b442d7ae5de65041a878335f0f81531e57443" Feb 21 07:11:48 crc kubenswrapper[4820]: I0221 07:11:48.726643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:48 crc kubenswrapper[4820]: I0221 07:11:48.727391 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:48 crc kubenswrapper[4820]: I0221 07:11:48.768287 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:49 crc kubenswrapper[4820]: I0221 07:11:49.296475 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:49 crc kubenswrapper[4820]: I0221 07:11:49.339797 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:51 crc kubenswrapper[4820]: I0221 07:11:51.273147 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g68lp" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" containerID="cri-o://71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" gracePeriod=2 Feb 21 07:11:52 crc kubenswrapper[4820]: I0221 07:11:52.934776 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.058860 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") pod \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.058946 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") pod \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.059031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") pod \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\" (UID: \"72ced9f9-32a3-46c4-b5a9-7c6d394bd164\") " Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.060015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities" (OuterVolumeSpecName: "utilities") pod "72ced9f9-32a3-46c4-b5a9-7c6d394bd164" (UID: "72ced9f9-32a3-46c4-b5a9-7c6d394bd164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.065138 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7" (OuterVolumeSpecName: "kube-api-access-9j4m7") pod "72ced9f9-32a3-46c4-b5a9-7c6d394bd164" (UID: "72ced9f9-32a3-46c4-b5a9-7c6d394bd164"). InnerVolumeSpecName "kube-api-access-9j4m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.160416 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4m7\" (UniqueName: \"kubernetes.io/projected/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-kube-api-access-9j4m7\") on node \"crc\" DevicePath \"\"" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.160461 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.202517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72ced9f9-32a3-46c4-b5a9-7c6d394bd164" (UID: "72ced9f9-32a3-46c4-b5a9-7c6d394bd164"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.262361 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ced9f9-32a3-46c4-b5a9-7c6d394bd164-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289568 4820 generic.go:334] "Generic (PLEG): container finished" podID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" exitCode=0 Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98"} Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289638 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g68lp" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289659 4820 scope.go:117] "RemoveContainer" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.289646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g68lp" event={"ID":"72ced9f9-32a3-46c4-b5a9-7c6d394bd164","Type":"ContainerDied","Data":"3cca4ef5882796f022029d444d82100b128dbefcd368d40ca8b6e76495cb4966"} Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.322329 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.325214 4820 scope.go:117] "RemoveContainer" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.326158 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g68lp"] Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.346521 4820 scope.go:117] "RemoveContainer" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.369327 4820 scope.go:117] "RemoveContainer" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" Feb 21 07:11:53 crc kubenswrapper[4820]: E0221 07:11:53.370035 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98\": container with ID starting with 71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98 not found: ID does not exist" containerID="71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370073 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98"} err="failed to get container status \"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98\": rpc error: code = NotFound desc = could not find container \"71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98\": container with ID starting with 71a3fa5e307b13d58b3e920533943375a712556f22f793f77d47102d0a3e4d98 not found: ID does not exist" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370097 4820 scope.go:117] "RemoveContainer" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" Feb 21 07:11:53 crc kubenswrapper[4820]: E0221 07:11:53.370581 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429\": container with ID starting with eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429 not found: ID does not exist" containerID="eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370615 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429"} err="failed to get container status \"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429\": rpc error: code = NotFound desc = could not find container \"eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429\": container with ID starting with eaaba092c267768906c9661e4a130943061bd17fc01edbc50bc42d4242f52429 not found: ID does not exist" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370643 4820 scope.go:117] "RemoveContainer" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" Feb 21 07:11:53 crc kubenswrapper[4820]: E0221 
07:11:53.370947 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd\": container with ID starting with a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd not found: ID does not exist" containerID="a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.370971 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd"} err="failed to get container status \"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd\": rpc error: code = NotFound desc = could not find container \"a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd\": container with ID starting with a56278c2394377560c656f2bfa90333b16b4909767d1d1519e2105b8c59367bd not found: ID does not exist" Feb 21 07:11:53 crc kubenswrapper[4820]: I0221 07:11:53.713641 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" path="/var/lib/kubelet/pods/72ced9f9-32a3-46c4-b5a9-7c6d394bd164/volumes" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.118277 4820 scope.go:117] "RemoveContainer" containerID="52db6acc38ff2a23c299765955438b0540a4c5ba1d62d6356d26d0d4454620b3" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.163920 4820 scope.go:117] "RemoveContainer" containerID="d5d4ebfd3d862ab82dd24efdb0236db9cf326c55f3fab0e5ba28750a426c7f68" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.183513 4820 scope.go:117] "RemoveContainer" containerID="e889c593ed0d71d0bd8a837d661899903d747301909f78ed5da991ce6eccf229" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.212650 4820 scope.go:117] "RemoveContainer" containerID="550c85937cab1a43ffca5a3e6f730da87ca2ca354c9ca4640bf21a06db239cf3" Feb 21 07:12:15 crc 
kubenswrapper[4820]: I0221 07:12:15.242565 4820 scope.go:117] "RemoveContainer" containerID="ced644e0ce17e36b8fc26dcef8bef247a0ca698d43783b8feefdf41c4c74cc3d" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.265668 4820 scope.go:117] "RemoveContainer" containerID="902a90534639057fe4891bc5ba6d70d20ddb57a4bac2175eb285eb30ef1ad8ea" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.302221 4820 scope.go:117] "RemoveContainer" containerID="2906e8fbc9b8391ea1b9f7b50ccdd20d9a364edc7038390a746c5010002fe445" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.321072 4820 scope.go:117] "RemoveContainer" containerID="84344b3d5ae53a06ac9828132a33cafdbcfdeafdabeded21cd72b5eb2ec97792" Feb 21 07:12:15 crc kubenswrapper[4820]: I0221 07:12:15.337311 4820 scope.go:117] "RemoveContainer" containerID="1f8b1fb2f69da036c688f31fb3679ae1f19a1bae47b10780c72a6f4de62dcb8b" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.851565 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:19 crc kubenswrapper[4820]: E0221 07:12:19.852314 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-utilities" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852332 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-utilities" Feb 21 07:12:19 crc kubenswrapper[4820]: E0221 07:12:19.852346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852353 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" Feb 21 07:12:19 crc kubenswrapper[4820]: E0221 07:12:19.852372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-content" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852380 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="extract-content" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.852546 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ced9f9-32a3-46c4-b5a9-7c6d394bd164" containerName="registry-server" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.853785 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:19 crc kubenswrapper[4820]: I0221 07:12:19.861097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.016418 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.016671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.016797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"community-operators-l2wrp\" (UID: 
\"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.117807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.117918 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.117969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.118446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.118472 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") 
" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.138466 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"community-operators-l2wrp\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.175044 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:20 crc kubenswrapper[4820]: I0221 07:12:20.616391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:21 crc kubenswrapper[4820]: I0221 07:12:21.499273 4820 generic.go:334] "Generic (PLEG): container finished" podID="de999a72-1e7e-461a-a907-c24875dba879" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" exitCode=0 Feb 21 07:12:21 crc kubenswrapper[4820]: I0221 07:12:21.499556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30"} Feb 21 07:12:21 crc kubenswrapper[4820]: I0221 07:12:21.499585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerStarted","Data":"53fc57866f63f70098d655c5a5614087b69ee94673e7a6a34fd55d921072b114"} Feb 21 07:12:22 crc kubenswrapper[4820]: I0221 07:12:22.511376 4820 generic.go:334] "Generic (PLEG): container finished" podID="de999a72-1e7e-461a-a907-c24875dba879" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" exitCode=0 Feb 21 07:12:22 crc 
kubenswrapper[4820]: I0221 07:12:22.511476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb"} Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.348466 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.350287 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.363973 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.469854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.469978 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.470015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"redhat-marketplace-zxhsj\" (UID: 
\"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.520755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerStarted","Data":"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf"} Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.540715 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2wrp" podStartSLOduration=3.115931908 podStartE2EDuration="4.540694678s" podCreationTimestamp="2026-02-21 07:12:19 +0000 UTC" firstStartedPulling="2026-02-21 07:12:21.501057704 +0000 UTC m=+1516.534141902" lastFinishedPulling="2026-02-21 07:12:22.925820474 +0000 UTC m=+1517.958904672" observedRunningTime="2026-02-21 07:12:23.535863096 +0000 UTC m=+1518.568947294" watchObservedRunningTime="2026-02-21 07:12:23.540694678 +0000 UTC m=+1518.573778866" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.571618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.590771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"redhat-marketplace-zxhsj\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:23 crc kubenswrapper[4820]: I0221 07:12:23.669263 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.118033 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:24 crc kubenswrapper[4820]: W0221 07:12:24.121620 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1a58f68_a763_4319_a105_a195c741011f.slice/crio-8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac WatchSource:0}: Error finding container 8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac: Status 404 returned error can't find the container with id 8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.530348 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1a58f68-a763-4319-a105-a195c741011f" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" exitCode=0 Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.530409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11"} Feb 21 07:12:24 crc kubenswrapper[4820]: I0221 07:12:24.530459 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerStarted","Data":"8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac"} Feb 21 07:12:25 crc kubenswrapper[4820]: I0221 07:12:25.562176 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1a58f68-a763-4319-a105-a195c741011f" containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" exitCode=0 Feb 21 07:12:25 crc kubenswrapper[4820]: I0221 
07:12:25.562457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92"} Feb 21 07:12:26 crc kubenswrapper[4820]: I0221 07:12:26.572569 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerStarted","Data":"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3"} Feb 21 07:12:26 crc kubenswrapper[4820]: I0221 07:12:26.601933 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zxhsj" podStartSLOduration=2.172310301 podStartE2EDuration="3.601918152s" podCreationTimestamp="2026-02-21 07:12:23 +0000 UTC" firstStartedPulling="2026-02-21 07:12:24.532013953 +0000 UTC m=+1519.565098151" lastFinishedPulling="2026-02-21 07:12:25.961621804 +0000 UTC m=+1520.994706002" observedRunningTime="2026-02-21 07:12:26.599315571 +0000 UTC m=+1521.632399869" watchObservedRunningTime="2026-02-21 07:12:26.601918152 +0000 UTC m=+1521.635002340" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.175204 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.175690 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.221634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.658224 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 
07:12:30 crc kubenswrapper[4820]: I0221 07:12:30.722990 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:32 crc kubenswrapper[4820]: I0221 07:12:32.613045 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l2wrp" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" containerID="cri-o://fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" gracePeriod=2 Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.053169 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.207866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") pod \"de999a72-1e7e-461a-a907-c24875dba879\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.208184 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") pod \"de999a72-1e7e-461a-a907-c24875dba879\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.208212 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") pod \"de999a72-1e7e-461a-a907-c24875dba879\" (UID: \"de999a72-1e7e-461a-a907-c24875dba879\") " Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.209056 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities" (OuterVolumeSpecName: "utilities") pod "de999a72-1e7e-461a-a907-c24875dba879" (UID: "de999a72-1e7e-461a-a907-c24875dba879"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.214426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj" (OuterVolumeSpecName: "kube-api-access-kvnwj") pod "de999a72-1e7e-461a-a907-c24875dba879" (UID: "de999a72-1e7e-461a-a907-c24875dba879"). InnerVolumeSpecName "kube-api-access-kvnwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.311731 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.311769 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnwj\" (UniqueName: \"kubernetes.io/projected/de999a72-1e7e-461a-a907-c24875dba879-kube-api-access-kvnwj\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.443276 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de999a72-1e7e-461a-a907-c24875dba879" (UID: "de999a72-1e7e-461a-a907-c24875dba879"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.514500 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de999a72-1e7e-461a-a907-c24875dba879-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622719 4820 generic.go:334] "Generic (PLEG): container finished" podID="de999a72-1e7e-461a-a907-c24875dba879" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" exitCode=0 Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf"} Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2wrp" event={"ID":"de999a72-1e7e-461a-a907-c24875dba879","Type":"ContainerDied","Data":"53fc57866f63f70098d655c5a5614087b69ee94673e7a6a34fd55d921072b114"} Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622806 4820 scope.go:117] "RemoveContainer" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.622834 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2wrp" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.642741 4820 scope.go:117] "RemoveContainer" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.667341 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.668534 4820 scope.go:117] "RemoveContainer" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.670852 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.670945 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.683158 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2wrp"] Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.694664 4820 scope.go:117] "RemoveContainer" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" Feb 21 07:12:33 crc kubenswrapper[4820]: E0221 07:12:33.695120 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf\": container with ID starting with fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf not found: ID does not exist" containerID="fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695152 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf"} err="failed to get container status \"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf\": rpc error: code = NotFound desc = could not find container \"fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf\": container with ID starting with fadc91457f58dacc277f9903505436f6d74bcb26cf5e3883f1ef9dd01efa70bf not found: ID does not exist" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695172 4820 scope.go:117] "RemoveContainer" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" Feb 21 07:12:33 crc kubenswrapper[4820]: E0221 07:12:33.695664 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb\": container with ID starting with 4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb not found: ID does not exist" containerID="4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695795 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb"} err="failed to get container status \"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb\": rpc error: code = NotFound desc = could not find container \"4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb\": container with ID starting with 4e1278bb0bc9addd1a6d09875022861ed33947b10c1f0996268ea690f05bdbdb not found: ID does not exist" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.695890 4820 scope.go:117] "RemoveContainer" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" Feb 21 07:12:33 crc kubenswrapper[4820]: E0221 07:12:33.697723 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30\": container with ID starting with f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30 not found: ID does not exist" containerID="f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.697802 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30"} err="failed to get container status \"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30\": rpc error: code = NotFound desc = could not find container \"f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30\": container with ID starting with f74b8c420eb80a11033d43817e8e803660b5720b1d836bf3e472ce2b65330f30 not found: ID does not exist" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.705280 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de999a72-1e7e-461a-a907-c24875dba879" path="/var/lib/kubelet/pods/de999a72-1e7e-461a-a907-c24875dba879/volumes" Feb 21 07:12:33 crc kubenswrapper[4820]: I0221 07:12:33.736817 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:34 crc kubenswrapper[4820]: I0221 07:12:34.679551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:35 crc kubenswrapper[4820]: I0221 07:12:35.851525 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:37 crc kubenswrapper[4820]: I0221 07:12:37.658532 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zxhsj" podUID="e1a58f68-a763-4319-a105-a195c741011f" 
containerName="registry-server" containerID="cri-o://6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" gracePeriod=2 Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.069455 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.184085 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") pod \"e1a58f68-a763-4319-a105-a195c741011f\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.184185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") pod \"e1a58f68-a763-4319-a105-a195c741011f\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.184262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") pod \"e1a58f68-a763-4319-a105-a195c741011f\" (UID: \"e1a58f68-a763-4319-a105-a195c741011f\") " Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.185732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities" (OuterVolumeSpecName: "utilities") pod "e1a58f68-a763-4319-a105-a195c741011f" (UID: "e1a58f68-a763-4319-a105-a195c741011f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.186133 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.194743 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t" (OuterVolumeSpecName: "kube-api-access-28h2t") pod "e1a58f68-a763-4319-a105-a195c741011f" (UID: "e1a58f68-a763-4319-a105-a195c741011f"). InnerVolumeSpecName "kube-api-access-28h2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.213372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a58f68-a763-4319-a105-a195c741011f" (UID: "e1a58f68-a763-4319-a105-a195c741011f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.287760 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28h2t\" (UniqueName: \"kubernetes.io/projected/e1a58f68-a763-4319-a105-a195c741011f-kube-api-access-28h2t\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.287793 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a58f68-a763-4319-a105-a195c741011f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.669819 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1a58f68-a763-4319-a105-a195c741011f" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" exitCode=0 Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.669877 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxhsj" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.669893 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3"} Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.670306 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxhsj" event={"ID":"e1a58f68-a763-4319-a105-a195c741011f","Type":"ContainerDied","Data":"8fa1d44bbe904a390c5f31829c12242548b91bd0d050fccae0ca69068d71e9ac"} Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.670347 4820 scope.go:117] "RemoveContainer" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.688957 4820 scope.go:117] "RemoveContainer" 
containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.704797 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.712349 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxhsj"] Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.724252 4820 scope.go:117] "RemoveContainer" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.742795 4820 scope.go:117] "RemoveContainer" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" Feb 21 07:12:38 crc kubenswrapper[4820]: E0221 07:12:38.743199 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3\": container with ID starting with 6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3 not found: ID does not exist" containerID="6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743314 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3"} err="failed to get container status \"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3\": rpc error: code = NotFound desc = could not find container \"6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3\": container with ID starting with 6d9a2f0b464534c09ce6b1f3d06a7d3e16c8218e641c817665ea42cd268539f3 not found: ID does not exist" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743397 4820 scope.go:117] "RemoveContainer" 
containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" Feb 21 07:12:38 crc kubenswrapper[4820]: E0221 07:12:38.743714 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92\": container with ID starting with 65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92 not found: ID does not exist" containerID="65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743794 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92"} err="failed to get container status \"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92\": rpc error: code = NotFound desc = could not find container \"65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92\": container with ID starting with 65f3025679bfb60192c6acb0c8e1e786e8e7c591af1ada34bf26c26513901a92 not found: ID does not exist" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.743858 4820 scope.go:117] "RemoveContainer" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" Feb 21 07:12:38 crc kubenswrapper[4820]: E0221 07:12:38.744132 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11\": container with ID starting with e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11 not found: ID does not exist" containerID="e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11" Feb 21 07:12:38 crc kubenswrapper[4820]: I0221 07:12:38.744202 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11"} err="failed to get container status \"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11\": rpc error: code = NotFound desc = could not find container \"e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11\": container with ID starting with e277a2c9757b1188d8f1bd54ee2acdb33a0ed6f6ded29b9be24a5010073c3f11 not found: ID does not exist" Feb 21 07:12:39 crc kubenswrapper[4820]: I0221 07:12:39.719785 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a58f68-a763-4319-a105-a195c741011f" path="/var/lib/kubelet/pods/e1a58f68-a763-4319-a105-a195c741011f/volumes" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.271489 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272423 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272441 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272457 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272465 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272480 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272488 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272508 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272596 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272622 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="extract-utilities" Feb 21 07:13:15 crc kubenswrapper[4820]: E0221 07:13:15.272636 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272643 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="extract-content" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272795 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a58f68-a763-4319-a105-a195c741011f" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.272810 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de999a72-1e7e-461a-a907-c24875dba879" containerName="registry-server" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.273707 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.282984 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.416466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.416567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.416602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.492693 4820 scope.go:117] "RemoveContainer" containerID="54118e9818d7276160841e63d567ac3e54c21ac7cf2b86b070a7bea2245976ec" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518232 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"certified-operators-z8459\" (UID: 
\"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.518890 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.519212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.528718 4820 scope.go:117] "RemoveContainer" containerID="f3324889fec35626b75b20c53e1108c5e3bcfec60c0afc870568283a3900d80f" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.542430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4878\" 
(UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"certified-operators-z8459\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.582293 4820 scope.go:117] "RemoveContainer" containerID="0bec83aee0f9a29a60415108651d81b24d0de435829325f2cb93c8d2a1d9ae61" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.604468 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.604756 4820 scope.go:117] "RemoveContainer" containerID="8de9677e20a8b782d2bcecb9fa76424556258bd3e583a5de8910cd040771e0ad" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.653449 4820 scope.go:117] "RemoveContainer" containerID="fdbb90e329836ac7456cf06344114203e75f7f1a57280874e8b064833b913f8e" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.686558 4820 scope.go:117] "RemoveContainer" containerID="eafd72d9e7eb9455c63fe46ce3b813c939d82e75512da868bf318e1592ef0443" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.730090 4820 scope.go:117] "RemoveContainer" containerID="c6eec58d937060e917865b55d6939557fd730b3dc3294db9f26e433da11bcf3a" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.758153 4820 scope.go:117] "RemoveContainer" containerID="bae2eaf1b1365374df39b8e13452ae986ea6ebeb55baae9a5ee7d5811ab1d647" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.785387 4820 scope.go:117] "RemoveContainer" containerID="2888304fe149a4652cef0ecaece438bfd7d58f18a6fbf5e65f2e3c959991183b" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.822601 4820 scope.go:117] "RemoveContainer" containerID="826aef72e76fbab81ee8a9700d6ed1f07cc109d2629349f71b59a9573befe3d1" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.854916 4820 scope.go:117] "RemoveContainer" 
containerID="cbde025c9fa7d22d168b54e6b8a411d4937140bd66d43a2f8ef9982aa91aa117" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.889141 4820 scope.go:117] "RemoveContainer" containerID="c89955e8456635f9567d07ebef7a2fae175b713a07f50ea3684f6959998a79da" Feb 21 07:13:15 crc kubenswrapper[4820]: I0221 07:13:15.917581 4820 scope.go:117] "RemoveContainer" containerID="89a677ab22f4bcd7551d19abb1edd151c1367901214a3d624d55bc1c5a3aa903" Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.095501 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.983218 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f95139f-3378-4e78-b252-d5c8675b569d" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" exitCode=0 Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.983284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf"} Feb 21 07:13:16 crc kubenswrapper[4820]: I0221 07:13:16.983762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerStarted","Data":"8d55eafc614b0a6bbfb5f893449a921ca315613e72a61449dede6af0b0e34777"} Feb 21 07:13:19 crc kubenswrapper[4820]: I0221 07:13:19.002660 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f95139f-3378-4e78-b252-d5c8675b569d" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" exitCode=0 Feb 21 07:13:19 crc kubenswrapper[4820]: I0221 07:13:19.003144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" 
event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12"} Feb 21 07:13:20 crc kubenswrapper[4820]: I0221 07:13:20.013702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerStarted","Data":"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550"} Feb 21 07:13:20 crc kubenswrapper[4820]: I0221 07:13:20.038637 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8459" podStartSLOduration=2.580394691 podStartE2EDuration="5.038610543s" podCreationTimestamp="2026-02-21 07:13:15 +0000 UTC" firstStartedPulling="2026-02-21 07:13:16.985692176 +0000 UTC m=+1572.018776384" lastFinishedPulling="2026-02-21 07:13:19.443908038 +0000 UTC m=+1574.476992236" observedRunningTime="2026-02-21 07:13:20.033568296 +0000 UTC m=+1575.066652524" watchObservedRunningTime="2026-02-21 07:13:20.038610543 +0000 UTC m=+1575.071694781" Feb 21 07:13:25 crc kubenswrapper[4820]: I0221 07:13:25.605589 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:25 crc kubenswrapper[4820]: I0221 07:13:25.606351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:25 crc kubenswrapper[4820]: I0221 07:13:25.657183 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:26 crc kubenswrapper[4820]: I0221 07:13:26.089201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:26 crc kubenswrapper[4820]: I0221 07:13:26.137082 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.064800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8459" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" containerID="cri-o://e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" gracePeriod=2 Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.478940 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.613732 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") pod \"5f95139f-3378-4e78-b252-d5c8675b569d\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.613780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") pod \"5f95139f-3378-4e78-b252-d5c8675b569d\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.613876 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") pod \"5f95139f-3378-4e78-b252-d5c8675b569d\" (UID: \"5f95139f-3378-4e78-b252-d5c8675b569d\") " Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.615102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities" (OuterVolumeSpecName: "utilities") pod "5f95139f-3378-4e78-b252-d5c8675b569d" (UID: 
"5f95139f-3378-4e78-b252-d5c8675b569d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.623734 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878" (OuterVolumeSpecName: "kube-api-access-t4878") pod "5f95139f-3378-4e78-b252-d5c8675b569d" (UID: "5f95139f-3378-4e78-b252-d5c8675b569d"). InnerVolumeSpecName "kube-api-access-t4878". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.715928 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4878\" (UniqueName: \"kubernetes.io/projected/5f95139f-3378-4e78-b252-d5c8675b569d-kube-api-access-t4878\") on node \"crc\" DevicePath \"\"" Feb 21 07:13:28 crc kubenswrapper[4820]: I0221 07:13:28.715987 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081400 4820 generic.go:334] "Generic (PLEG): container finished" podID="5f95139f-3378-4e78-b252-d5c8675b569d" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" exitCode=0 Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081462 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550"} Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081475 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8459" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8459" event={"ID":"5f95139f-3378-4e78-b252-d5c8675b569d","Type":"ContainerDied","Data":"8d55eafc614b0a6bbfb5f893449a921ca315613e72a61449dede6af0b0e34777"} Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.081535 4820 scope.go:117] "RemoveContainer" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.110785 4820 scope.go:117] "RemoveContainer" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.153021 4820 scope.go:117] "RemoveContainer" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.159797 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f95139f-3378-4e78-b252-d5c8675b569d" (UID: "5f95139f-3378-4e78-b252-d5c8675b569d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.172268 4820 scope.go:117] "RemoveContainer" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" Feb 21 07:13:29 crc kubenswrapper[4820]: E0221 07:13:29.172736 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550\": container with ID starting with e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550 not found: ID does not exist" containerID="e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.172771 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550"} err="failed to get container status \"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550\": rpc error: code = NotFound desc = could not find container \"e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550\": container with ID starting with e175828efc103899f147fb7b27944173ca7bdf6efbfd4727c68c15a8e623f550 not found: ID does not exist" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.172793 4820 scope.go:117] "RemoveContainer" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" Feb 21 07:13:29 crc kubenswrapper[4820]: E0221 07:13:29.173017 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12\": container with ID starting with e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12 not found: ID does not exist" containerID="e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.173064 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12"} err="failed to get container status \"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12\": rpc error: code = NotFound desc = could not find container \"e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12\": container with ID starting with e15adeef0ff56585aed5ac5a3232ed64c88cc2daec3aba7bcab11f6fde4e3f12 not found: ID does not exist" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.173095 4820 scope.go:117] "RemoveContainer" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" Feb 21 07:13:29 crc kubenswrapper[4820]: E0221 07:13:29.173413 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf\": container with ID starting with ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf not found: ID does not exist" containerID="ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.173437 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf"} err="failed to get container status \"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf\": rpc error: code = NotFound desc = could not find container \"ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf\": container with ID starting with ba00461668736d349b88732e056be565c6d230d5733e261c22ce32d93b23b0cf not found: ID does not exist" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.228349 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5f95139f-3378-4e78-b252-d5c8675b569d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.426177 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.432812 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8459"] Feb 21 07:13:29 crc kubenswrapper[4820]: I0221 07:13:29.708848 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" path="/var/lib/kubelet/pods/5f95139f-3378-4e78-b252-d5c8675b569d/volumes" Feb 21 07:14:13 crc kubenswrapper[4820]: I0221 07:14:13.816841 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:14:13 crc kubenswrapper[4820]: I0221 07:14:13.817389 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.111073 4820 scope.go:117] "RemoveContainer" containerID="24941eaa5fcba668b44518933915d73aa568096044e3c4ed1b1d3b36fe63bafd" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.143492 4820 scope.go:117] "RemoveContainer" containerID="41d8a8ccd5e19ac57e720c85ad185f48f7da5235d29f9404d9f0a52202561714" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.192187 4820 scope.go:117] "RemoveContainer" containerID="8f1053354930657be13a47d1867923e155692b07e230c8c0cef421265cc3f890" Feb 
21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.217480 4820 scope.go:117] "RemoveContainer" containerID="498df7f52db5016d1ea471a40a54c53253220d0dedd0b2737e1896b8a9a9f7ae" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.238209 4820 scope.go:117] "RemoveContainer" containerID="9f7f20d400dd7826ec45e2cb589dc07ed34aae16fbcb9165c10870bcc6f36e39" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.253838 4820 scope.go:117] "RemoveContainer" containerID="e51a0c40d4d4f93896ed1ad8bb07fb842ed12a2ac2a6f114e30bfa929e0c2882" Feb 21 07:14:16 crc kubenswrapper[4820]: I0221 07:14:16.291053 4820 scope.go:117] "RemoveContainer" containerID="f7fd77b014ee72eca0be4a4c777ce16b6927f8e4f122356935b98249924cfad2" Feb 21 07:14:43 crc kubenswrapper[4820]: I0221 07:14:43.816434 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:14:43 crc kubenswrapper[4820]: I0221 07:14:43.817164 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.144661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 07:15:00 crc kubenswrapper[4820]: E0221 07:15:00.145815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-utilities" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.145835 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-utilities" Feb 21 07:15:00 crc kubenswrapper[4820]: E0221 07:15:00.145848 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-content" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.145856 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="extract-content" Feb 21 07:15:00 crc kubenswrapper[4820]: E0221 07:15:00.145881 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.145889 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.146052 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f95139f-3378-4e78-b252-d5c8675b569d" containerName="registry-server" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.146581 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.148454 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.148455 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.156369 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.157484 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.157630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.157779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.259125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.259180 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.259227 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.260398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.266532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.275961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"collect-profiles-29527635-gddrt\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.467233 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:00 crc kubenswrapper[4820]: I0221 07:15:00.876365 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 07:15:01 crc kubenswrapper[4820]: I0221 07:15:01.771496 4820 generic.go:334] "Generic (PLEG): container finished" podID="ebbbeb29-093d-424c-aa21-a711f564f201" containerID="a723e81e08af1fbe61c3aa1a83712ca47314287f719a875048e1f08fe12358d0" exitCode=0 Feb 21 07:15:01 crc kubenswrapper[4820]: I0221 07:15:01.771600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" event={"ID":"ebbbeb29-093d-424c-aa21-a711f564f201","Type":"ContainerDied","Data":"a723e81e08af1fbe61c3aa1a83712ca47314287f719a875048e1f08fe12358d0"} Feb 21 07:15:01 crc kubenswrapper[4820]: I0221 07:15:01.771662 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" 
event={"ID":"ebbbeb29-093d-424c-aa21-a711f564f201","Type":"ContainerStarted","Data":"91b11a148d927e25a5d57756e195a1d73d78980db20620c1818237ad4e45751f"} Feb 21 07:15:02 crc kubenswrapper[4820]: I0221 07:15:02.993037 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.099755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") pod \"ebbbeb29-093d-424c-aa21-a711f564f201\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.099809 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") pod \"ebbbeb29-093d-424c-aa21-a711f564f201\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.099879 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") pod \"ebbbeb29-093d-424c-aa21-a711f564f201\" (UID: \"ebbbeb29-093d-424c-aa21-a711f564f201\") " Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.100770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume" (OuterVolumeSpecName: "config-volume") pod "ebbbeb29-093d-424c-aa21-a711f564f201" (UID: "ebbbeb29-093d-424c-aa21-a711f564f201"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.107679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ebbbeb29-093d-424c-aa21-a711f564f201" (UID: "ebbbeb29-093d-424c-aa21-a711f564f201"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.124463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp" (OuterVolumeSpecName: "kube-api-access-wl9wp") pod "ebbbeb29-093d-424c-aa21-a711f564f201" (UID: "ebbbeb29-093d-424c-aa21-a711f564f201"). InnerVolumeSpecName "kube-api-access-wl9wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.201471 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebbbeb29-093d-424c-aa21-a711f564f201-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.201521 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl9wp\" (UniqueName: \"kubernetes.io/projected/ebbbeb29-093d-424c-aa21-a711f564f201-kube-api-access-wl9wp\") on node \"crc\" DevicePath \"\"" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.201539 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebbbeb29-093d-424c-aa21-a711f564f201-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.785986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" 
event={"ID":"ebbbeb29-093d-424c-aa21-a711f564f201","Type":"ContainerDied","Data":"91b11a148d927e25a5d57756e195a1d73d78980db20620c1818237ad4e45751f"} Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.786313 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b11a148d927e25a5d57756e195a1d73d78980db20620c1818237ad4e45751f" Feb 21 07:15:03 crc kubenswrapper[4820]: I0221 07:15:03.786036 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt" Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.815880 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.816414 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.816456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:15:13 crc kubenswrapper[4820]: I0221 07:15:13.817112 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:15:13 crc 
kubenswrapper[4820]: I0221 07:15:13.817169 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" gracePeriod=600 Feb 21 07:15:13 crc kubenswrapper[4820]: E0221 07:15:13.951440 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.864325 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" exitCode=0 Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.864405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"} Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.864708 4820 scope.go:117] "RemoveContainer" containerID="382dbabbc108418e0159c4f962ec6351f7f55d31b6d9ca634247ee411e9ee6e0" Feb 21 07:15:14 crc kubenswrapper[4820]: I0221 07:15:14.865119 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:14 crc kubenswrapper[4820]: E0221 07:15:14.865381 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.430327 4820 scope.go:117] "RemoveContainer" containerID="23c184a5e245f5facd743c3a7e6bea11c07b828a4d25451cb2550eaa44349110" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.454202 4820 scope.go:117] "RemoveContainer" containerID="841b7a62d1e6b92cb6679a13f353ab7adf29630b1c91e4ad2d0c98c9562682d7" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.472028 4820 scope.go:117] "RemoveContainer" containerID="ab7e68ddc2356c6ae5d0b5f7f63da545c73754b32e149e02621025d7c3d10d36" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.532944 4820 scope.go:117] "RemoveContainer" containerID="21769d7e4b9a4ff09d20e68b3668dbde7c57ce716fc232f4365f9370127b9d52" Feb 21 07:15:16 crc kubenswrapper[4820]: I0221 07:15:16.562774 4820 scope.go:117] "RemoveContainer" containerID="4cf28ea16018fb755adbd8f5f3ce5ec56799e0bc139946346840132dd9f3b8c1" Feb 21 07:15:27 crc kubenswrapper[4820]: I0221 07:15:27.696274 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:27 crc kubenswrapper[4820]: E0221 07:15:27.697125 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:41 crc kubenswrapper[4820]: I0221 07:15:41.697508 4820 scope.go:117] "RemoveContainer" 
containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:41 crc kubenswrapper[4820]: E0221 07:15:41.698347 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:15:54 crc kubenswrapper[4820]: I0221 07:15:54.696426 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:15:54 crc kubenswrapper[4820]: E0221 07:15:54.697442 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:08 crc kubenswrapper[4820]: I0221 07:16:08.696430 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:08 crc kubenswrapper[4820]: E0221 07:16:08.697129 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:20 crc kubenswrapper[4820]: I0221 07:16:20.696877 4820 scope.go:117] 
"RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:20 crc kubenswrapper[4820]: E0221 07:16:20.697964 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:33 crc kubenswrapper[4820]: I0221 07:16:33.697432 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:33 crc kubenswrapper[4820]: E0221 07:16:33.697848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:45 crc kubenswrapper[4820]: I0221 07:16:45.704887 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:45 crc kubenswrapper[4820]: E0221 07:16:45.705564 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:16:57 crc kubenswrapper[4820]: I0221 07:16:57.696943 
4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:16:57 crc kubenswrapper[4820]: E0221 07:16:57.697380 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:08 crc kubenswrapper[4820]: I0221 07:17:08.696833 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:08 crc kubenswrapper[4820]: E0221 07:17:08.697835 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:21 crc kubenswrapper[4820]: I0221 07:17:21.697224 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:21 crc kubenswrapper[4820]: E0221 07:17:21.698101 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:35 crc kubenswrapper[4820]: I0221 
07:17:35.702560 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:35 crc kubenswrapper[4820]: E0221 07:17:35.703555 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:17:47 crc kubenswrapper[4820]: I0221 07:17:47.696910 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:17:47 crc kubenswrapper[4820]: E0221 07:17:47.699571 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:01 crc kubenswrapper[4820]: I0221 07:18:01.697227 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:01 crc kubenswrapper[4820]: E0221 07:18:01.697904 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:15 crc 
kubenswrapper[4820]: I0221 07:18:15.707472 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:15 crc kubenswrapper[4820]: E0221 07:18:15.709062 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:27 crc kubenswrapper[4820]: I0221 07:18:27.697253 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:27 crc kubenswrapper[4820]: E0221 07:18:27.698049 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:18:39 crc kubenswrapper[4820]: I0221 07:18:39.697102 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:18:39 crc kubenswrapper[4820]: E0221 07:18:39.697864 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 
21 07:18:52 crc kubenswrapper[4820]: I0221 07:18:52.696900 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:18:52 crc kubenswrapper[4820]: E0221 07:18:52.697724 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:19:06 crc kubenswrapper[4820]: I0221 07:19:06.696641 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:19:06 crc kubenswrapper[4820]: E0221 07:19:06.697363 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:19:17 crc kubenswrapper[4820]: I0221 07:19:17.697452 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:19:17 crc kubenswrapper[4820]: E0221 07:19:17.698349 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:19:30 crc kubenswrapper[4820]: I0221 07:19:30.697093 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:19:30 crc kubenswrapper[4820]: E0221 07:19:30.697884 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:19:41 crc kubenswrapper[4820]: I0221 07:19:41.697473 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:19:41 crc kubenswrapper[4820]: E0221 07:19:41.699339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:19:55 crc kubenswrapper[4820]: I0221 07:19:55.701340 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:19:55 crc kubenswrapper[4820]: E0221 07:19:55.702162 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:20:06 crc kubenswrapper[4820]: I0221 07:20:06.696845 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:20:06 crc kubenswrapper[4820]: E0221 07:20:06.697645 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:20:19 crc kubenswrapper[4820]: I0221 07:20:19.697725 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4"
Feb 21 07:20:20 crc kubenswrapper[4820]: I0221 07:20:20.279923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9"}
Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.917853 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"]
Feb 21 07:22:36 crc kubenswrapper[4820]: E0221 07:22:36.919816 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" containerName="collect-profiles"
Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.919836 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" containerName="collect-profiles"
Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.920025 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" containerName="collect-profiles"
Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.921194 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:36 crc kubenswrapper[4820]: I0221 07:22:36.930871 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"]
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.050836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.051054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.051274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.153974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.175455 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"redhat-marketplace-cdlsm\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") " pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.277657 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:37 crc kubenswrapper[4820]: I0221 07:22:37.739707 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"]
Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.425448 4820 generic.go:334] "Generic (PLEG): container finished" podID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6" exitCode=0
Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.427649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"}
Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.427720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerStarted","Data":"ce1a9f67f3249b9da77b6bfcf849a1f251744eca44d09eed846c3de212c14f17"}
Feb 21 07:22:38 crc kubenswrapper[4820]: I0221 07:22:38.428638 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 07:22:39 crc kubenswrapper[4820]: I0221 07:22:39.435777 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerStarted","Data":"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"}
Feb 21 07:22:40 crc kubenswrapper[4820]: I0221 07:22:40.447796 4820 generic.go:334] "Generic (PLEG): container finished" podID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4" exitCode=0
Feb 21 07:22:40 crc kubenswrapper[4820]: I0221 07:22:40.447856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"}
Feb 21 07:22:41 crc kubenswrapper[4820]: I0221 07:22:41.461606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerStarted","Data":"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"}
Feb 21 07:22:41 crc kubenswrapper[4820]: I0221 07:22:41.494169 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cdlsm" podStartSLOduration=3.061840848 podStartE2EDuration="5.494146674s" podCreationTimestamp="2026-02-21 07:22:36 +0000 UTC" firstStartedPulling="2026-02-21 07:22:38.428178739 +0000 UTC m=+2133.461262977" lastFinishedPulling="2026-02-21 07:22:40.860484565 +0000 UTC m=+2135.893568803" observedRunningTime="2026-02-21 07:22:41.489506754 +0000 UTC m=+2136.522590982" watchObservedRunningTime="2026-02-21 07:22:41.494146674 +0000 UTC m=+2136.527230882"
Feb 21 07:22:43 crc kubenswrapper[4820]: I0221 07:22:43.816658 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:22:43 crc kubenswrapper[4820]: I0221 07:22:43.817087 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.278752 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.279168 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.346981 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.548510 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:47 crc kubenswrapper[4820]: I0221 07:22:47.594513 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"]
Feb 21 07:22:49 crc kubenswrapper[4820]: I0221 07:22:49.518902 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cdlsm" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server" containerID="cri-o://5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" gracePeriod=2
Feb 21 07:22:49 crc kubenswrapper[4820]: I0221 07:22:49.992203 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.076127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") pod \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") "
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.076594 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") pod \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") "
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.076822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") pod \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\" (UID: \"c54ffd60-01b5-4ac5-9466-eb97debf8fa9\") "
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.077643 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities" (OuterVolumeSpecName: "utilities") pod "c54ffd60-01b5-4ac5-9466-eb97debf8fa9" (UID: "c54ffd60-01b5-4ac5-9466-eb97debf8fa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.088929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk" (OuterVolumeSpecName: "kube-api-access-nbldk") pod "c54ffd60-01b5-4ac5-9466-eb97debf8fa9" (UID: "c54ffd60-01b5-4ac5-9466-eb97debf8fa9"). InnerVolumeSpecName "kube-api-access-nbldk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.115304 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c54ffd60-01b5-4ac5-9466-eb97debf8fa9" (UID: "c54ffd60-01b5-4ac5-9466-eb97debf8fa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.178416 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.178460 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbldk\" (UniqueName: \"kubernetes.io/projected/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-kube-api-access-nbldk\") on node \"crc\" DevicePath \"\""
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.178474 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ffd60-01b5-4ac5-9466-eb97debf8fa9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534668 4820 generic.go:334] "Generic (PLEG): container finished" podID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388" exitCode=0
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"}
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdlsm" event={"ID":"c54ffd60-01b5-4ac5-9466-eb97debf8fa9","Type":"ContainerDied","Data":"ce1a9f67f3249b9da77b6bfcf849a1f251744eca44d09eed846c3de212c14f17"}
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534884 4820 scope.go:117] "RemoveContainer" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.534938 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdlsm"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.582574 4820 scope.go:117] "RemoveContainer" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.621828 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"]
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.622855 4820 scope.go:117] "RemoveContainer" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.631787 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdlsm"]
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.655178 4820 scope.go:117] "RemoveContainer" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"
Feb 21 07:22:50 crc kubenswrapper[4820]: E0221 07:22:50.655714 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388\": container with ID starting with 5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388 not found: ID does not exist" containerID="5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.655798 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388"} err="failed to get container status \"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388\": rpc error: code = NotFound desc = could not find container \"5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388\": container with ID starting with 5a81a4ae3bf0fbbee4cf9c4223d5157ccdc58ea69a4333c4bff63e339008c388 not found: ID does not exist"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.655850 4820 scope.go:117] "RemoveContainer" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"
Feb 21 07:22:50 crc kubenswrapper[4820]: E0221 07:22:50.656157 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4\": container with ID starting with 57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4 not found: ID does not exist" containerID="57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.656192 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4"} err="failed to get container status \"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4\": rpc error: code = NotFound desc = could not find container \"57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4\": container with ID starting with 57898f296b83b750edb6057f2801496e0112e39f3903d3fc926d1ef77cdf7dd4 not found: ID does not exist"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.656213 4820 scope.go:117] "RemoveContainer" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"
Feb 21 07:22:50 crc kubenswrapper[4820]: E0221 07:22:50.656893 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6\": container with ID starting with 19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6 not found: ID does not exist" containerID="19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"
Feb 21 07:22:50 crc kubenswrapper[4820]: I0221 07:22:50.656921 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6"} err="failed to get container status \"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6\": rpc error: code = NotFound desc = could not find container \"19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6\": container with ID starting with 19792d09ee553c4ac7c40bba139682b38f00d2b03e1cc831732a59f4c4c05bb6 not found: ID does not exist"
Feb 21 07:22:51 crc kubenswrapper[4820]: I0221 07:22:51.707552 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" path="/var/lib/kubelet/pods/c54ffd60-01b5-4ac5-9466-eb97debf8fa9/volumes"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.022383 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"]
Feb 21 07:22:56 crc kubenswrapper[4820]: E0221 07:22:56.023234 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-content"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023300 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-content"
Feb 21 07:22:56 crc kubenswrapper[4820]: E0221 07:22:56.023358 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-utilities"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023376 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="extract-utilities"
Feb 21 07:22:56 crc kubenswrapper[4820]: E0221 07:22:56.023399 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023416 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.023757 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54ffd60-01b5-4ac5-9466-eb97debf8fa9" containerName="registry-server"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.025736 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.039925 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"]
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.168486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.168564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.168607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.269953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270610 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270526 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.270982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.293488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"redhat-operators-rrjdr\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") " pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.359220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:22:56 crc kubenswrapper[4820]: I0221 07:22:56.826604 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"]
Feb 21 07:22:57 crc kubenswrapper[4820]: I0221 07:22:57.594512 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" exitCode=0
Feb 21 07:22:57 crc kubenswrapper[4820]: I0221 07:22:57.594584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8"}
Feb 21 07:22:57 crc kubenswrapper[4820]: I0221 07:22:57.594642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerStarted","Data":"7dad3769cb5d649f5dc179f5360f48af0dbab75bb74b9c79adf06a61b5a619cb"}
Feb 21 07:22:58 crc kubenswrapper[4820]: I0221 07:22:58.604476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerStarted","Data":"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6"}
Feb 21 07:22:59 crc kubenswrapper[4820]: I0221 07:22:59.614967 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" exitCode=0
Feb 21 07:22:59 crc kubenswrapper[4820]: I0221 07:22:59.615023 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6"}
Feb 21 07:23:00 crc kubenswrapper[4820]: I0221 07:23:00.627291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerStarted","Data":"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660"}
Feb 21 07:23:00 crc kubenswrapper[4820]: I0221 07:23:00.656271 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrjdr" podStartSLOduration=3.237053367 podStartE2EDuration="5.656221702s" podCreationTimestamp="2026-02-21 07:22:55 +0000 UTC" firstStartedPulling="2026-02-21 07:22:57.596851307 +0000 UTC m=+2152.629935545" lastFinishedPulling="2026-02-21 07:23:00.016019642 +0000 UTC m=+2155.049103880" observedRunningTime="2026-02-21 07:23:00.649629194 +0000 UTC m=+2155.682713442" watchObservedRunningTime="2026-02-21 07:23:00.656221702 +0000 UTC m=+2155.689305930"
Feb 21 07:23:06 crc kubenswrapper[4820]: I0221 07:23:06.359895 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:23:06 crc kubenswrapper[4820]: I0221 07:23:06.360625 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:23:07 crc kubenswrapper[4820]: I0221 07:23:07.423072 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rrjdr" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" probeResult="failure" output=<
Feb 21 07:23:07 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 07:23:07 crc kubenswrapper[4820]: >
Feb 21 07:23:13 crc kubenswrapper[4820]: I0221 07:23:13.816084 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:23:13 crc kubenswrapper[4820]: I0221 07:23:13.816689 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:23:16 crc kubenswrapper[4820]: I0221 07:23:16.434151 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:23:16 crc kubenswrapper[4820]: I0221 07:23:16.494304 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:23:16 crc kubenswrapper[4820]: I0221 07:23:16.691854 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"]
Feb 21 07:23:17 crc kubenswrapper[4820]: I0221 07:23:17.751775 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrjdr" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" containerID="cri-o://83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" gracePeriod=2
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.193528 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr"
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.356442 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") pod \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") "
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.356591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") pod \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") "
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.356744 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") pod \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\" (UID: \"b6ab96ec-4842-4dbf-bb94-58ebaac1a551\") "
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.357631 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities" (OuterVolumeSpecName: "utilities") pod "b6ab96ec-4842-4dbf-bb94-58ebaac1a551" (UID: "b6ab96ec-4842-4dbf-bb94-58ebaac1a551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.365426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj" (OuterVolumeSpecName: "kube-api-access-khskj") pod "b6ab96ec-4842-4dbf-bb94-58ebaac1a551" (UID: "b6ab96ec-4842-4dbf-bb94-58ebaac1a551"). InnerVolumeSpecName "kube-api-access-khskj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.458154 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khskj\" (UniqueName: \"kubernetes.io/projected/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-kube-api-access-khskj\") on node \"crc\" DevicePath \"\""
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.458194 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.527641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6ab96ec-4842-4dbf-bb94-58ebaac1a551" (UID: "b6ab96ec-4842-4dbf-bb94-58ebaac1a551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.559703 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6ab96ec-4842-4dbf-bb94-58ebaac1a551-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763126 4820 generic.go:334] "Generic (PLEG): container finished" podID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" exitCode=0
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660"}
Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763306 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrjdr" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763328 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrjdr" event={"ID":"b6ab96ec-4842-4dbf-bb94-58ebaac1a551","Type":"ContainerDied","Data":"7dad3769cb5d649f5dc179f5360f48af0dbab75bb74b9c79adf06a61b5a619cb"} Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.763358 4820 scope.go:117] "RemoveContainer" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.791586 4820 scope.go:117] "RemoveContainer" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.822576 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.837119 4820 scope.go:117] "RemoveContainer" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.838574 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrjdr"] Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.859216 4820 scope.go:117] "RemoveContainer" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" Feb 21 07:23:18 crc kubenswrapper[4820]: E0221 07:23:18.860202 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660\": container with ID starting with 83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660 not found: ID does not exist" containerID="83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860295 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660"} err="failed to get container status \"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660\": rpc error: code = NotFound desc = could not find container \"83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660\": container with ID starting with 83021cd4bf2acb8ac1b96b3d51f4b4ca5da0c66a9f4b2bcfc9f70bcc871de660 not found: ID does not exist" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860370 4820 scope.go:117] "RemoveContainer" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" Feb 21 07:23:18 crc kubenswrapper[4820]: E0221 07:23:18.860866 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6\": container with ID starting with 5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6 not found: ID does not exist" containerID="5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860945 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6"} err="failed to get container status \"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6\": rpc error: code = NotFound desc = could not find container \"5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6\": container with ID starting with 5816f0a6cc317725bb29660886a36c244f83ea053d8e5313a61502e3c4108fd6 not found: ID does not exist" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.860990 4820 scope.go:117] "RemoveContainer" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" Feb 21 07:23:18 crc kubenswrapper[4820]: E0221 
07:23:18.861768 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8\": container with ID starting with 662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8 not found: ID does not exist" containerID="662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8" Feb 21 07:23:18 crc kubenswrapper[4820]: I0221 07:23:18.861803 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8"} err="failed to get container status \"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8\": rpc error: code = NotFound desc = could not find container \"662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8\": container with ID starting with 662d26bb0b5e863f1ab39c55bc3d10ae90e5a13e1ba4305022f2e70a8c9e17e8 not found: ID does not exist" Feb 21 07:23:19 crc kubenswrapper[4820]: I0221 07:23:19.713931 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" path="/var/lib/kubelet/pods/b6ab96ec-4842-4dbf-bb94-58ebaac1a551/volumes" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.816752 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817270 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817875 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.817922 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9" gracePeriod=600 Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.994368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9"} Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.994370 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9" exitCode=0 Feb 21 07:23:43 crc kubenswrapper[4820]: I0221 07:23:43.994859 4820 scope.go:117] "RemoveContainer" containerID="df0cb69f2e7db1ffb44415b02a0bde4e3bd756653ae38f232efa5ab0d2dea6e4" Feb 21 07:23:45 crc kubenswrapper[4820]: I0221 07:23:45.004034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"} Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.618078 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:17 crc kubenswrapper[4820]: E0221 07:24:17.619360 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-utilities" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619385 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-utilities" Feb 21 07:24:17 crc kubenswrapper[4820]: E0221 07:24:17.619440 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619454 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" Feb 21 07:24:17 crc kubenswrapper[4820]: E0221 07:24:17.619480 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-content" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619494 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="extract-content" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.619805 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ab96ec-4842-4dbf-bb94-58ebaac1a551" containerName="registry-server" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.621979 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.638317 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.749519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.749573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.749590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.851286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.851343 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.851542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.852573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.852908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.886346 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"community-operators-v7ml8\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:17 crc kubenswrapper[4820]: I0221 07:24:17.947633 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:18 crc kubenswrapper[4820]: I0221 07:24:18.488139 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:19 crc kubenswrapper[4820]: I0221 07:24:19.259892 4820 generic.go:334] "Generic (PLEG): container finished" podID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" exitCode=0 Feb 21 07:24:19 crc kubenswrapper[4820]: I0221 07:24:19.259965 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2"} Feb 21 07:24:19 crc kubenswrapper[4820]: I0221 07:24:19.260209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerStarted","Data":"6bc53e469972753514545b99cd59ba9fb24a9e09aeb649985dd2366cd22715e8"} Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.190981 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.192583 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.200886 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.268126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerStarted","Data":"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0"} Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.285618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.285681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.285723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.386834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.386957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.387012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.387928 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.388098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.429135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrkc\" (UniqueName: 
\"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"certified-operators-nz2j4\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.518925 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:20 crc kubenswrapper[4820]: I0221 07:24:20.964371 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.278115 4820 generic.go:334] "Generic (PLEG): container finished" podID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" exitCode=0 Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.278204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0"} Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.280834 4820 generic.go:334] "Generic (PLEG): container finished" podID="0affc452-556a-4307-9201-fed39571b1d0" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" exitCode=0 Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.280867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab"} Feb 21 07:24:21 crc kubenswrapper[4820]: I0221 07:24:21.280890 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" 
event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerStarted","Data":"2963a33f03df3165cacdeb753981d1b27c38d9d369803edadbd56752c233cb3f"} Feb 21 07:24:22 crc kubenswrapper[4820]: I0221 07:24:22.289827 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerStarted","Data":"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b"} Feb 21 07:24:22 crc kubenswrapper[4820]: I0221 07:24:22.292786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerStarted","Data":"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad"} Feb 21 07:24:22 crc kubenswrapper[4820]: I0221 07:24:22.340060 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7ml8" podStartSLOduration=2.788103926 podStartE2EDuration="5.340043884s" podCreationTimestamp="2026-02-21 07:24:17 +0000 UTC" firstStartedPulling="2026-02-21 07:24:19.264586622 +0000 UTC m=+2234.297670820" lastFinishedPulling="2026-02-21 07:24:21.81652656 +0000 UTC m=+2236.849610778" observedRunningTime="2026-02-21 07:24:22.337931246 +0000 UTC m=+2237.371015444" watchObservedRunningTime="2026-02-21 07:24:22.340043884 +0000 UTC m=+2237.373128082" Feb 21 07:24:23 crc kubenswrapper[4820]: I0221 07:24:23.300650 4820 generic.go:334] "Generic (PLEG): container finished" podID="0affc452-556a-4307-9201-fed39571b1d0" containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" exitCode=0 Feb 21 07:24:23 crc kubenswrapper[4820]: I0221 07:24:23.300719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" 
event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b"} Feb 21 07:24:24 crc kubenswrapper[4820]: I0221 07:24:24.323311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerStarted","Data":"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365"} Feb 21 07:24:24 crc kubenswrapper[4820]: I0221 07:24:24.347024 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nz2j4" podStartSLOduration=1.947210765 podStartE2EDuration="4.347008415s" podCreationTimestamp="2026-02-21 07:24:20 +0000 UTC" firstStartedPulling="2026-02-21 07:24:21.282129441 +0000 UTC m=+2236.315213649" lastFinishedPulling="2026-02-21 07:24:23.681927101 +0000 UTC m=+2238.715011299" observedRunningTime="2026-02-21 07:24:24.342043751 +0000 UTC m=+2239.375127949" watchObservedRunningTime="2026-02-21 07:24:24.347008415 +0000 UTC m=+2239.380092613" Feb 21 07:24:27 crc kubenswrapper[4820]: I0221 07:24:27.948034 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:27 crc kubenswrapper[4820]: I0221 07:24:27.948982 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:27 crc kubenswrapper[4820]: I0221 07:24:27.987482 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:28 crc kubenswrapper[4820]: I0221 07:24:28.396351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:28 crc kubenswrapper[4820]: I0221 07:24:28.581533 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.370364 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v7ml8" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" containerID="cri-o://a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" gracePeriod=2 Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.519656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.519716 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.587622 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.785571 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.848772 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") pod \"133ffeb7-28b1-4e97-a617-84328eac0f17\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.848856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") pod \"133ffeb7-28b1-4e97-a617-84328eac0f17\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.848891 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") pod \"133ffeb7-28b1-4e97-a617-84328eac0f17\" (UID: \"133ffeb7-28b1-4e97-a617-84328eac0f17\") " Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.850091 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities" (OuterVolumeSpecName: "utilities") pod "133ffeb7-28b1-4e97-a617-84328eac0f17" (UID: "133ffeb7-28b1-4e97-a617-84328eac0f17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.853892 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr" (OuterVolumeSpecName: "kube-api-access-kmddr") pod "133ffeb7-28b1-4e97-a617-84328eac0f17" (UID: "133ffeb7-28b1-4e97-a617-84328eac0f17"). InnerVolumeSpecName "kube-api-access-kmddr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.898877 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "133ffeb7-28b1-4e97-a617-84328eac0f17" (UID: "133ffeb7-28b1-4e97-a617-84328eac0f17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.949716 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmddr\" (UniqueName: \"kubernetes.io/projected/133ffeb7-28b1-4e97-a617-84328eac0f17-kube-api-access-kmddr\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.949753 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:30 crc kubenswrapper[4820]: I0221 07:24:30.949762 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ffeb7-28b1-4e97-a617-84328eac0f17-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381208 4820 generic.go:334] "Generic (PLEG): container finished" podID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" exitCode=0 Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381329 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7ml8" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad"} Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381393 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7ml8" event={"ID":"133ffeb7-28b1-4e97-a617-84328eac0f17","Type":"ContainerDied","Data":"6bc53e469972753514545b99cd59ba9fb24a9e09aeb649985dd2366cd22715e8"} Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.381417 4820 scope.go:117] "RemoveContainer" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.399444 4820 scope.go:117] "RemoveContainer" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.421419 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.428416 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7ml8"] Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.433483 4820 scope.go:117] "RemoveContainer" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.450124 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.457500 4820 scope.go:117] "RemoveContainer" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" Feb 21 07:24:31 crc 
kubenswrapper[4820]: E0221 07:24:31.458072 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad\": container with ID starting with a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad not found: ID does not exist" containerID="a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458128 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad"} err="failed to get container status \"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad\": rpc error: code = NotFound desc = could not find container \"a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad\": container with ID starting with a9eb42faeb545bef21f16e8f736fdd4ccb63968dfd7105963d9af70ef8719dad not found: ID does not exist" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458155 4820 scope.go:117] "RemoveContainer" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" Feb 21 07:24:31 crc kubenswrapper[4820]: E0221 07:24:31.458620 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0\": container with ID starting with 77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0 not found: ID does not exist" containerID="77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458766 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0"} err="failed to get container status 
\"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0\": rpc error: code = NotFound desc = could not find container \"77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0\": container with ID starting with 77b21d763827ac83962dfaa38d31be379d99686a57141bc23628bf7a368538d0 not found: ID does not exist" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.458813 4820 scope.go:117] "RemoveContainer" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" Feb 21 07:24:31 crc kubenswrapper[4820]: E0221 07:24:31.459169 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2\": container with ID starting with bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2 not found: ID does not exist" containerID="bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.459198 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2"} err="failed to get container status \"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2\": rpc error: code = NotFound desc = could not find container \"bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2\": container with ID starting with bcace42703c286157c71b4e0938f7696e4b4c3e4e0f08b55c3f6792bfabdf5a2 not found: ID does not exist" Feb 21 07:24:31 crc kubenswrapper[4820]: I0221 07:24:31.710908 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" path="/var/lib/kubelet/pods/133ffeb7-28b1-4e97-a617-84328eac0f17/volumes" Feb 21 07:24:33 crc kubenswrapper[4820]: I0221 07:24:33.583331 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 
07:24:33 crc kubenswrapper[4820]: I0221 07:24:33.583845 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nz2j4" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" containerID="cri-o://d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" gracePeriod=2 Feb 21 07:24:33 crc kubenswrapper[4820]: I0221 07:24:33.964945 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.098012 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") pod \"0affc452-556a-4307-9201-fed39571b1d0\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.098190 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") pod \"0affc452-556a-4307-9201-fed39571b1d0\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.098948 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities" (OuterVolumeSpecName: "utilities") pod "0affc452-556a-4307-9201-fed39571b1d0" (UID: "0affc452-556a-4307-9201-fed39571b1d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.099037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") pod \"0affc452-556a-4307-9201-fed39571b1d0\" (UID: \"0affc452-556a-4307-9201-fed39571b1d0\") " Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.100453 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.103745 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc" (OuterVolumeSpecName: "kube-api-access-zvrkc") pod "0affc452-556a-4307-9201-fed39571b1d0" (UID: "0affc452-556a-4307-9201-fed39571b1d0"). InnerVolumeSpecName "kube-api-access-zvrkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.154915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0affc452-556a-4307-9201-fed39571b1d0" (UID: "0affc452-556a-4307-9201-fed39571b1d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.200896 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvrkc\" (UniqueName: \"kubernetes.io/projected/0affc452-556a-4307-9201-fed39571b1d0-kube-api-access-zvrkc\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.200927 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0affc452-556a-4307-9201-fed39571b1d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.405773 4820 generic.go:334] "Generic (PLEG): container finished" podID="0affc452-556a-4307-9201-fed39571b1d0" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" exitCode=0 Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.405868 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz2j4" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.405866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365"} Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.406289 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz2j4" event={"ID":"0affc452-556a-4307-9201-fed39571b1d0","Type":"ContainerDied","Data":"2963a33f03df3165cacdeb753981d1b27c38d9d369803edadbd56752c233cb3f"} Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.406315 4820 scope.go:117] "RemoveContainer" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.427191 4820 scope.go:117] "RemoveContainer" 
containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.445178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.451326 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nz2j4"] Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.462153 4820 scope.go:117] "RemoveContainer" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.475323 4820 scope.go:117] "RemoveContainer" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" Feb 21 07:24:34 crc kubenswrapper[4820]: E0221 07:24:34.475585 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365\": container with ID starting with d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365 not found: ID does not exist" containerID="d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.475692 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365"} err="failed to get container status \"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365\": rpc error: code = NotFound desc = could not find container \"d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365\": container with ID starting with d7ca495c0ecd336897e6715a59ef3b7a684370592e522e2a442cc00d9e009365 not found: ID does not exist" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.475791 4820 scope.go:117] "RemoveContainer" 
containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" Feb 21 07:24:34 crc kubenswrapper[4820]: E0221 07:24:34.476191 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b\": container with ID starting with 5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b not found: ID does not exist" containerID="5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.476215 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b"} err="failed to get container status \"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b\": rpc error: code = NotFound desc = could not find container \"5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b\": container with ID starting with 5ca282f91e562fd745bd56e50c871f14ce9ce62e7b0863577792da29e461ff9b not found: ID does not exist" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.476229 4820 scope.go:117] "RemoveContainer" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" Feb 21 07:24:34 crc kubenswrapper[4820]: E0221 07:24:34.476521 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab\": container with ID starting with f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab not found: ID does not exist" containerID="f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab" Feb 21 07:24:34 crc kubenswrapper[4820]: I0221 07:24:34.476601 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab"} err="failed to get container status \"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab\": rpc error: code = NotFound desc = could not find container \"f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab\": container with ID starting with f59d2c6a324b9d85fca247b6101db07cc04fd887f4da9b10715bf53d324a3cab not found: ID does not exist" Feb 21 07:24:35 crc kubenswrapper[4820]: I0221 07:24:35.713743 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0affc452-556a-4307-9201-fed39571b1d0" path="/var/lib/kubelet/pods/0affc452-556a-4307-9201-fed39571b1d0/volumes" Feb 21 07:26:13 crc kubenswrapper[4820]: I0221 07:26:13.816047 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:26:13 crc kubenswrapper[4820]: I0221 07:26:13.816592 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:26:43 crc kubenswrapper[4820]: I0221 07:26:43.816540 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:26:43 crc kubenswrapper[4820]: I0221 07:26:43.818694 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.816579 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.817165 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.817219 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.817925 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:27:13 crc kubenswrapper[4820]: I0221 07:27:13.818002 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" gracePeriod=600 Feb 21 07:27:13 crc kubenswrapper[4820]: E0221 07:27:13.941523 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730162 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" exitCode=0 Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799"} Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730258 4820 scope.go:117] "RemoveContainer" containerID="864923974d73bbf665c1bb371fd49b8c1f45b2b5f96e7f7de515bffdc15084f9" Feb 21 07:27:14 crc kubenswrapper[4820]: I0221 07:27:14.730735 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:14 crc kubenswrapper[4820]: E0221 07:27:14.730952 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:25 crc kubenswrapper[4820]: I0221 07:27:25.702518 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:25 crc kubenswrapper[4820]: E0221 07:27:25.703583 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:36 crc kubenswrapper[4820]: I0221 07:27:36.696386 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:36 crc kubenswrapper[4820]: E0221 07:27:36.697111 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:27:50 crc kubenswrapper[4820]: I0221 07:27:50.696688 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:27:50 crc kubenswrapper[4820]: E0221 07:27:50.697610 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:05 crc kubenswrapper[4820]: I0221 07:28:05.701880 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:05 crc kubenswrapper[4820]: E0221 07:28:05.702565 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:17 crc kubenswrapper[4820]: I0221 07:28:17.697919 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:17 crc kubenswrapper[4820]: E0221 07:28:17.698634 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:29 crc kubenswrapper[4820]: I0221 07:28:29.697037 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:29 crc kubenswrapper[4820]: E0221 07:28:29.697884 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:41 crc kubenswrapper[4820]: I0221 07:28:41.697320 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:41 crc kubenswrapper[4820]: E0221 07:28:41.698565 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:28:56 crc kubenswrapper[4820]: I0221 07:28:56.697354 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:28:56 crc kubenswrapper[4820]: E0221 07:28:56.699683 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:07 crc kubenswrapper[4820]: I0221 07:29:07.697093 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:07 crc kubenswrapper[4820]: E0221 07:29:07.698338 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:21 crc kubenswrapper[4820]: I0221 07:29:21.698044 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:21 crc kubenswrapper[4820]: E0221 07:29:21.699466 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:32 crc kubenswrapper[4820]: I0221 07:29:32.697350 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:32 crc kubenswrapper[4820]: E0221 07:29:32.698595 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:43 crc kubenswrapper[4820]: I0221 07:29:43.697060 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:43 crc kubenswrapper[4820]: E0221 07:29:43.697817 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:29:56 crc kubenswrapper[4820]: I0221 07:29:56.697014 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:29:56 crc kubenswrapper[4820]: E0221 07:29:56.697748 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.180817 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.183079 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.183491 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.183739 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.183937 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.184174 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.184417 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="extract-content" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.184635 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.184828 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.185070 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.185318 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: E0221 07:30:00.185537 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.185722 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="extract-utilities" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.186953 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="133ffeb7-28b1-4e97-a617-84328eac0f17" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.187223 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0affc452-556a-4307-9201-fed39571b1d0" containerName="registry-server" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.188523 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.192404 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.193808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.194050 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.245311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.245434 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.245584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.346933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.347356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.347688 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.347774 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.359574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.366123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"collect-profiles-29527650-5nntb\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.514109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:00 crc kubenswrapper[4820]: I0221 07:30:00.980563 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 07:30:01 crc kubenswrapper[4820]: I0221 07:30:01.133958 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" event={"ID":"9686bf95-baf7-4066-8769-66f168be0215","Type":"ContainerStarted","Data":"797b9f6f306dee486593dfb28bed25626861fabec5ee5e0d93c1a16dafdc8bfc"} Feb 21 07:30:02 crc kubenswrapper[4820]: I0221 07:30:02.148020 4820 generic.go:334] "Generic (PLEG): container finished" podID="9686bf95-baf7-4066-8769-66f168be0215" containerID="c2867835bac0090aaa7273a7c4ef4cb3c7da8d37f816ccb9d979c732e69cab4f" exitCode=0 Feb 21 07:30:02 crc kubenswrapper[4820]: I0221 07:30:02.148121 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" 
event={"ID":"9686bf95-baf7-4066-8769-66f168be0215","Type":"ContainerDied","Data":"c2867835bac0090aaa7273a7c4ef4cb3c7da8d37f816ccb9d979c732e69cab4f"} Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.446422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.594124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") pod \"9686bf95-baf7-4066-8769-66f168be0215\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.594205 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") pod \"9686bf95-baf7-4066-8769-66f168be0215\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.595172 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume" (OuterVolumeSpecName: "config-volume") pod "9686bf95-baf7-4066-8769-66f168be0215" (UID: "9686bf95-baf7-4066-8769-66f168be0215"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.595211 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") pod \"9686bf95-baf7-4066-8769-66f168be0215\" (UID: \"9686bf95-baf7-4066-8769-66f168be0215\") " Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.595408 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9686bf95-baf7-4066-8769-66f168be0215-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.600847 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9686bf95-baf7-4066-8769-66f168be0215" (UID: "9686bf95-baf7-4066-8769-66f168be0215"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.601116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n" (OuterVolumeSpecName: "kube-api-access-5669n") pod "9686bf95-baf7-4066-8769-66f168be0215" (UID: "9686bf95-baf7-4066-8769-66f168be0215"). InnerVolumeSpecName "kube-api-access-5669n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.696976 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9686bf95-baf7-4066-8769-66f168be0215-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:30:03 crc kubenswrapper[4820]: I0221 07:30:03.697072 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5669n\" (UniqueName: \"kubernetes.io/projected/9686bf95-baf7-4066-8769-66f168be0215-kube-api-access-5669n\") on node \"crc\" DevicePath \"\"" Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.177029 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" event={"ID":"9686bf95-baf7-4066-8769-66f168be0215","Type":"ContainerDied","Data":"797b9f6f306dee486593dfb28bed25626861fabec5ee5e0d93c1a16dafdc8bfc"} Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.177086 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797b9f6f306dee486593dfb28bed25626861fabec5ee5e0d93c1a16dafdc8bfc" Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.177123 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb" Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.542383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"] Feb 21 07:30:04 crc kubenswrapper[4820]: I0221 07:30:04.554146 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527605-6xqg9"] Feb 21 07:30:05 crc kubenswrapper[4820]: I0221 07:30:05.706778 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b009b00-dfa6-40ba-b629-608fc71dc429" path="/var/lib/kubelet/pods/0b009b00-dfa6-40ba-b629-608fc71dc429/volumes" Feb 21 07:30:11 crc kubenswrapper[4820]: I0221 07:30:11.697947 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:30:11 crc kubenswrapper[4820]: E0221 07:30:11.698658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:30:16 crc kubenswrapper[4820]: I0221 07:30:16.986825 4820 scope.go:117] "RemoveContainer" containerID="d8fad70d0ffc026935b7857a9983aa7bde367f1ccdb48c593f103452b34e3bae" Feb 21 07:30:22 crc kubenswrapper[4820]: I0221 07:30:22.696999 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:30:22 crc kubenswrapper[4820]: E0221 07:30:22.697763 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:30:33 crc kubenswrapper[4820]: I0221 07:30:33.696574 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:30:33 crc kubenswrapper[4820]: E0221 07:30:33.697725 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:30:45 crc kubenswrapper[4820]: I0221 07:30:45.704299 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:30:45 crc kubenswrapper[4820]: E0221 07:30:45.705309 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:30:57 crc kubenswrapper[4820]: I0221 07:30:57.697471 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:30:57 crc kubenswrapper[4820]: E0221 07:30:57.699907 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:31:09 crc kubenswrapper[4820]: I0221 07:31:09.697857 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:31:09 crc kubenswrapper[4820]: E0221 07:31:09.698934 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:31:21 crc kubenswrapper[4820]: I0221 07:31:21.696961 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:31:21 crc kubenswrapper[4820]: E0221 07:31:21.698311 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:31:34 crc kubenswrapper[4820]: I0221 07:31:34.697239 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:31:34 crc kubenswrapper[4820]: E0221 07:31:34.698315 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:31:46 crc kubenswrapper[4820]: I0221 07:31:46.697298 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:31:46 crc kubenswrapper[4820]: E0221 07:31:46.698478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:31:57 crc kubenswrapper[4820]: I0221 07:31:57.697277 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:31:57 crc kubenswrapper[4820]: E0221 07:31:57.698020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:32:12 crc kubenswrapper[4820]: I0221 07:32:12.697039 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:32:12 crc kubenswrapper[4820]: E0221 07:32:12.697741 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:32:25 crc kubenswrapper[4820]: I0221 07:32:25.701223 4820 scope.go:117] "RemoveContainer" containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:32:26 crc kubenswrapper[4820]: I0221 07:32:26.455895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5"} Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.966355 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"] Feb 21 07:33:25 crc kubenswrapper[4820]: E0221 07:33:25.967813 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9686bf95-baf7-4066-8769-66f168be0215" containerName="collect-profiles" Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.967836 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9686bf95-baf7-4066-8769-66f168be0215" containerName="collect-profiles" Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.968458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9686bf95-baf7-4066-8769-66f168be0215" containerName="collect-profiles" Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.976087 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:25 crc kubenswrapper[4820]: I0221 07:33:25.992197 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"] Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.089403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.089547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.089602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.190671 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.190763 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.190792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.191285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.191485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.216033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"redhat-operators-ddb8j\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.320161 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.721209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"] Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.936693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d"} Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.936738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"800f5073be14762000edf6d05d7997d9f766e39dadc5e37374ca63e0465e3c6c"} Feb 21 07:33:26 crc kubenswrapper[4820]: I0221 07:33:26.938471 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:33:27 crc kubenswrapper[4820]: I0221 07:33:27.945039 4820 generic.go:334] "Generic (PLEG): container finished" podID="aee28481-4767-447d-97ea-0c0a44652ec4" containerID="ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d" exitCode=0 Feb 21 07:33:27 crc kubenswrapper[4820]: I0221 07:33:27.945137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d"} Feb 21 07:33:27 crc kubenswrapper[4820]: I0221 07:33:27.945608 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110"} Feb 21 07:33:28 crc kubenswrapper[4820]: I0221 
07:33:28.957180 4820 generic.go:334] "Generic (PLEG): container finished" podID="aee28481-4767-447d-97ea-0c0a44652ec4" containerID="9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110" exitCode=0 Feb 21 07:33:28 crc kubenswrapper[4820]: I0221 07:33:28.957314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110"} Feb 21 07:33:29 crc kubenswrapper[4820]: I0221 07:33:29.964434 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerStarted","Data":"a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05"} Feb 21 07:33:29 crc kubenswrapper[4820]: I0221 07:33:29.988002 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddb8j" podStartSLOduration=2.38307466 podStartE2EDuration="4.987969802s" podCreationTimestamp="2026-02-21 07:33:25 +0000 UTC" firstStartedPulling="2026-02-21 07:33:26.938294092 +0000 UTC m=+2781.971378290" lastFinishedPulling="2026-02-21 07:33:29.543189244 +0000 UTC m=+2784.576273432" observedRunningTime="2026-02-21 07:33:29.97872409 +0000 UTC m=+2785.011808308" watchObservedRunningTime="2026-02-21 07:33:29.987969802 +0000 UTC m=+2785.021054040" Feb 21 07:33:36 crc kubenswrapper[4820]: I0221 07:33:36.321381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:36 crc kubenswrapper[4820]: I0221 07:33:36.323736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:36 crc kubenswrapper[4820]: I0221 07:33:36.395096 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:37 crc kubenswrapper[4820]: I0221 07:33:37.090611 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:37 crc kubenswrapper[4820]: I0221 07:33:37.164475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"] Feb 21 07:33:39 crc kubenswrapper[4820]: I0221 07:33:39.033457 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddb8j" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server" containerID="cri-o://a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05" gracePeriod=2 Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.045351 4820 generic.go:334] "Generic (PLEG): container finished" podID="aee28481-4767-447d-97ea-0c0a44652ec4" containerID="a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05" exitCode=0 Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.045442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05"} Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.613789 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.718517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") pod \"aee28481-4767-447d-97ea-0c0a44652ec4\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.718647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") pod \"aee28481-4767-447d-97ea-0c0a44652ec4\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.720462 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") pod \"aee28481-4767-447d-97ea-0c0a44652ec4\" (UID: \"aee28481-4767-447d-97ea-0c0a44652ec4\") " Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.720866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities" (OuterVolumeSpecName: "utilities") pod "aee28481-4767-447d-97ea-0c0a44652ec4" (UID: "aee28481-4767-447d-97ea-0c0a44652ec4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.721514 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.729436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw" (OuterVolumeSpecName: "kube-api-access-ppggw") pod "aee28481-4767-447d-97ea-0c0a44652ec4" (UID: "aee28481-4767-447d-97ea-0c0a44652ec4"). InnerVolumeSpecName "kube-api-access-ppggw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.822664 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppggw\" (UniqueName: \"kubernetes.io/projected/aee28481-4767-447d-97ea-0c0a44652ec4-kube-api-access-ppggw\") on node \"crc\" DevicePath \"\"" Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.922992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aee28481-4767-447d-97ea-0c0a44652ec4" (UID: "aee28481-4767-447d-97ea-0c0a44652ec4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:33:40 crc kubenswrapper[4820]: I0221 07:33:40.924933 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aee28481-4767-447d-97ea-0c0a44652ec4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.057664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddb8j" event={"ID":"aee28481-4767-447d-97ea-0c0a44652ec4","Type":"ContainerDied","Data":"800f5073be14762000edf6d05d7997d9f766e39dadc5e37374ca63e0465e3c6c"} Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.057747 4820 scope.go:117] "RemoveContainer" containerID="a32153d3547773141c51f2cb185f7504065f914ef1fcc9ece5a1aae392a7cc05" Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.057779 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddb8j" Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.083794 4820 scope.go:117] "RemoveContainer" containerID="9b734c4b7bfc0bd8eb4f8bede006aaba55f7a0ed1b3a1d52987d8f2f7062a110" Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.122894 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"] Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.130363 4820 scope.go:117] "RemoveContainer" containerID="ce634a88568996f1f99baa3f9008633c8002c6661b813babac3389533e88553d" Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.135576 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddb8j"] Feb 21 07:33:41 crc kubenswrapper[4820]: I0221 07:33:41.711571 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" path="/var/lib/kubelet/pods/aee28481-4767-447d-97ea-0c0a44652ec4/volumes" Feb 21 07:34:43 crc 
kubenswrapper[4820]: I0221 07:34:43.817351 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:34:43 crc kubenswrapper[4820]: I0221 07:34:43.818078 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.801185 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:34:49 crc kubenswrapper[4820]: E0221 07:34:49.802208 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-utilities" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802229 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-utilities" Feb 21 07:34:49 crc kubenswrapper[4820]: E0221 07:34:49.802296 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-content" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802308 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="extract-content" Feb 21 07:34:49 crc kubenswrapper[4820]: E0221 07:34:49.802334 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802347 4820 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.802556 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee28481-4767-447d-97ea-0c0a44652ec4" containerName="registry-server" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.804357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.814609 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.973714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.973797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:49 crc kubenswrapper[4820]: I0221 07:34:49.973874 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075269 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.075874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.100651 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"community-operators-69wjz\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.175097 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:34:50 crc kubenswrapper[4820]: I0221 07:34:50.661634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:34:51 crc kubenswrapper[4820]: I0221 07:34:51.652610 4820 generic.go:334] "Generic (PLEG): container finished" podID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerID="9e6cbdcf98073623c42ebc08a3a9244293f57b950c05c7f4d4a46d72649d7bd4" exitCode=0 Feb 21 07:34:51 crc kubenswrapper[4820]: I0221 07:34:51.652768 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"9e6cbdcf98073623c42ebc08a3a9244293f57b950c05c7f4d4a46d72649d7bd4"} Feb 21 07:34:51 crc kubenswrapper[4820]: I0221 07:34:51.653199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerStarted","Data":"daf2c081b5e68ff2a466b3c60fde92970208c2de87ccc3cdf34358aa744193e2"} Feb 21 07:34:52 crc kubenswrapper[4820]: I0221 07:34:52.667857 4820 generic.go:334] "Generic (PLEG): container finished" podID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerID="def41a6eec93a17715a687e2008dba6a054262ab233fb3107ab1ad02fe7f9ea0" exitCode=0 Feb 21 07:34:52 crc kubenswrapper[4820]: I0221 07:34:52.667917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" 
event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"def41a6eec93a17715a687e2008dba6a054262ab233fb3107ab1ad02fe7f9ea0"} Feb 21 07:34:53 crc kubenswrapper[4820]: I0221 07:34:53.681943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerStarted","Data":"2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053"} Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.175512 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.176202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.258550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.283123 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69wjz" podStartSLOduration=9.886325012 podStartE2EDuration="11.283094047s" podCreationTimestamp="2026-02-21 07:34:49 +0000 UTC" firstStartedPulling="2026-02-21 07:34:51.655960057 +0000 UTC m=+2866.689044295" lastFinishedPulling="2026-02-21 07:34:53.052729112 +0000 UTC m=+2868.085813330" observedRunningTime="2026-02-21 07:34:53.708473643 +0000 UTC m=+2868.741557882" watchObservedRunningTime="2026-02-21 07:35:00.283094047 +0000 UTC m=+2875.316178275" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.814728 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:00 crc kubenswrapper[4820]: I0221 07:35:00.892023 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:35:02 crc kubenswrapper[4820]: I0221 07:35:02.757113 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69wjz" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" containerID="cri-o://2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053" gracePeriod=2 Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.767951 4820 generic.go:334] "Generic (PLEG): container finished" podID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerID="2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053" exitCode=0 Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.768021 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053"} Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.768444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69wjz" event={"ID":"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf","Type":"ContainerDied","Data":"daf2c081b5e68ff2a466b3c60fde92970208c2de87ccc3cdf34358aa744193e2"} Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.768478 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf2c081b5e68ff2a466b3c60fde92970208c2de87ccc3cdf34358aa744193e2" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.769633 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.913249 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") pod \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.913327 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") pod \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.913451 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") pod \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\" (UID: \"47cc3fdc-9559-4ca5-940a-40b6efdcd5cf\") " Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.914978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities" (OuterVolumeSpecName: "utilities") pod "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" (UID: "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.920032 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x" (OuterVolumeSpecName: "kube-api-access-m2g8x") pod "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" (UID: "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf"). InnerVolumeSpecName "kube-api-access-m2g8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:35:03 crc kubenswrapper[4820]: I0221 07:35:03.967393 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" (UID: "47cc3fdc-9559-4ca5-940a-40b6efdcd5cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.015606 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2g8x\" (UniqueName: \"kubernetes.io/projected/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-kube-api-access-m2g8x\") on node \"crc\" DevicePath \"\"" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.015823 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.015910 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.778661 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69wjz" Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.832575 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:35:04 crc kubenswrapper[4820]: I0221 07:35:04.839317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69wjz"] Feb 21 07:35:05 crc kubenswrapper[4820]: I0221 07:35:05.711986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" path="/var/lib/kubelet/pods/47cc3fdc-9559-4ca5-940a-40b6efdcd5cf/volumes" Feb 21 07:35:13 crc kubenswrapper[4820]: I0221 07:35:13.816030 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:35:13 crc kubenswrapper[4820]: I0221 07:35:13.816716 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.816264 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.816941 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.817027 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.818048 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:35:43 crc kubenswrapper[4820]: I0221 07:35:43.818158 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5" gracePeriod=600 Feb 21 07:35:44 crc kubenswrapper[4820]: I0221 07:35:44.123835 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5" exitCode=0 Feb 21 07:35:44 crc kubenswrapper[4820]: I0221 07:35:44.123911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5"} Feb 21 07:35:44 crc kubenswrapper[4820]: I0221 07:35:44.124039 4820 scope.go:117] "RemoveContainer" 
containerID="4807ffd8fcaa2a491855d7ab44f4ff59cf61bc73c159b81740acd93c54130799" Feb 21 07:35:45 crc kubenswrapper[4820]: I0221 07:35:45.136186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"} Feb 21 07:38:13 crc kubenswrapper[4820]: I0221 07:38:13.816063 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:38:13 crc kubenswrapper[4820]: I0221 07:38:13.816710 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:38:43 crc kubenswrapper[4820]: I0221 07:38:43.816628 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:38:43 crc kubenswrapper[4820]: I0221 07:38:43.817164 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.816514 4820 patch_prober.go:28] 
interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.817065 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.817115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.817992 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:39:13 crc kubenswrapper[4820]: I0221 07:39:13.818068 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" gracePeriod=600 Feb 21 07:39:13 crc kubenswrapper[4820]: E0221 07:39:13.966616 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"} Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043250 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" exitCode=0 Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043290 4820 scope.go:117] "RemoveContainer" containerID="490b6ac30893f64f1d044dba3e009d0873f7adb91481587baa0c783e1b2f2af5" Feb 21 07:39:14 crc kubenswrapper[4820]: I0221 07:39:14.043732 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:14 crc kubenswrapper[4820]: E0221 07:39:14.043976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:28 crc kubenswrapper[4820]: I0221 07:39:28.696513 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:28 crc kubenswrapper[4820]: E0221 07:39:28.697469 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:42 crc kubenswrapper[4820]: I0221 07:39:42.697125 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:42 crc kubenswrapper[4820]: E0221 07:39:42.698170 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:39:53 crc kubenswrapper[4820]: I0221 07:39:53.697655 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:39:53 crc kubenswrapper[4820]: E0221 07:39:53.698848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:05 crc kubenswrapper[4820]: I0221 07:40:05.701962 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:05 crc kubenswrapper[4820]: E0221 07:40:05.702890 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.534523 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:16 crc kubenswrapper[4820]: E0221 07:40:16.535623 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.535646 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" Feb 21 07:40:16 crc kubenswrapper[4820]: E0221 07:40:16.535670 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-utilities" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.535683 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-utilities" Feb 21 07:40:16 crc kubenswrapper[4820]: E0221 07:40:16.535756 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-content" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.535773 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="extract-content" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.536072 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cc3fdc-9559-4ca5-940a-40b6efdcd5cf" containerName="registry-server" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.537751 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.557194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.686279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.686352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.686392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.723661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.724940 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.746714 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787427 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787565 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.787933 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.788019 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.809125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"certified-operators-7kl7k\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.872069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.888649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.888704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.888743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod \"redhat-marketplace-l84vx\" (UID: 
\"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.991294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.991652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.991686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:16 crc kubenswrapper[4820]: I0221 07:40:16.992427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:16.995119 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " 
pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.025382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"redhat-marketplace-l84vx\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.086725 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.345912 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.553298 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:17 crc kubenswrapper[4820]: W0221 07:40:17.575807 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7f39e73_ac13_401b_8b13_6b43964609cf.slice/crio-b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e WatchSource:0}: Error finding container b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e: Status 404 returned error can't find the container with id b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.686989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerStarted","Data":"b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e"} Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.689691 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerID="c39f50a678425e9cc0fcddc26b8691457a1645406c597144548b9a01c6ce923c" exitCode=0 Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.689775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"c39f50a678425e9cc0fcddc26b8691457a1645406c597144548b9a01c6ce923c"} Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.689828 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerStarted","Data":"635db501838d4e42233fed604e1328a175ce679ed6d00e8bd80c7b6b2b676d72"} Feb 21 07:40:17 crc kubenswrapper[4820]: I0221 07:40:17.691758 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:40:18 crc kubenswrapper[4820]: I0221 07:40:18.718192 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerID="097e6b83ef8daa21dbd26a23bbbff42fe5299e2430ed1d3e0afdfd1e974e37c8" exitCode=0 Feb 21 07:40:18 crc kubenswrapper[4820]: I0221 07:40:18.718654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"097e6b83ef8daa21dbd26a23bbbff42fe5299e2430ed1d3e0afdfd1e974e37c8"} Feb 21 07:40:18 crc kubenswrapper[4820]: I0221 07:40:18.726147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerStarted","Data":"6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005"} Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.738614 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerID="6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005" exitCode=0 Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.738701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005"} Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.743319 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerID="5de55d91ad6a8e889320b2deff3ae550b0877b49c4dea85e11f0079996260448" exitCode=0 Feb 21 07:40:19 crc kubenswrapper[4820]: I0221 07:40:19.743352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"5de55d91ad6a8e889320b2deff3ae550b0877b49c4dea85e11f0079996260448"} Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.696789 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:20 crc kubenswrapper[4820]: E0221 07:40:20.697542 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.753712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" 
event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerStarted","Data":"ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce"} Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.756579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerStarted","Data":"e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d"} Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.783162 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7kl7k" podStartSLOduration=2.3739966949999998 podStartE2EDuration="4.783141431s" podCreationTimestamp="2026-02-21 07:40:16 +0000 UTC" firstStartedPulling="2026-02-21 07:40:17.691164628 +0000 UTC m=+3192.724248876" lastFinishedPulling="2026-02-21 07:40:20.100309414 +0000 UTC m=+3195.133393612" observedRunningTime="2026-02-21 07:40:20.77936892 +0000 UTC m=+3195.812453128" watchObservedRunningTime="2026-02-21 07:40:20.783141431 +0000 UTC m=+3195.816225639" Feb 21 07:40:20 crc kubenswrapper[4820]: I0221 07:40:20.809876 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l84vx" podStartSLOduration=3.250280036 podStartE2EDuration="4.809854517s" podCreationTimestamp="2026-02-21 07:40:16 +0000 UTC" firstStartedPulling="2026-02-21 07:40:18.723193366 +0000 UTC m=+3193.756277594" lastFinishedPulling="2026-02-21 07:40:20.282767847 +0000 UTC m=+3195.315852075" observedRunningTime="2026-02-21 07:40:20.803770442 +0000 UTC m=+3195.836854650" watchObservedRunningTime="2026-02-21 07:40:20.809854517 +0000 UTC m=+3195.842938725" Feb 21 07:40:26 crc kubenswrapper[4820]: I0221 07:40:26.873077 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:26 crc kubenswrapper[4820]: I0221 
07:40:26.873618 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:26 crc kubenswrapper[4820]: I0221 07:40:26.938541 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.087576 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.087643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.159772 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.864669 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:27 crc kubenswrapper[4820]: I0221 07:40:27.874850 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:29 crc kubenswrapper[4820]: I0221 07:40:29.327634 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:29 crc kubenswrapper[4820]: I0221 07:40:29.833996 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l84vx" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" containerID="cri-o://e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d" gracePeriod=2 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.324992 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 
07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.325512 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7kl7k" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" containerID="cri-o://ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce" gracePeriod=2 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.845786 4820 generic.go:334] "Generic (PLEG): container finished" podID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerID="ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce" exitCode=0 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.845897 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce"} Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.849664 4820 generic.go:334] "Generic (PLEG): container finished" podID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerID="e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d" exitCode=0 Feb 21 07:40:30 crc kubenswrapper[4820]: I0221 07:40:30.849752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d"} Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.215287 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.319073 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") pod \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.319173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") pod \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.319206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") pod \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\" (UID: \"a87b0b35-2855-41c6-be2a-02ec21e4f76c\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.320047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities" (OuterVolumeSpecName: "utilities") pod "a87b0b35-2855-41c6-be2a-02ec21e4f76c" (UID: "a87b0b35-2855-41c6-be2a-02ec21e4f76c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.322631 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.324746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v" (OuterVolumeSpecName: "kube-api-access-jd59v") pod "a87b0b35-2855-41c6-be2a-02ec21e4f76c" (UID: "a87b0b35-2855-41c6-be2a-02ec21e4f76c"). InnerVolumeSpecName "kube-api-access-jd59v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.387705 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a87b0b35-2855-41c6-be2a-02ec21e4f76c" (UID: "a87b0b35-2855-41c6-be2a-02ec21e4f76c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.419997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") pod \"a7f39e73-ac13-401b-8b13-6b43964609cf\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.420312 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") pod \"a7f39e73-ac13-401b-8b13-6b43964609cf\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.420509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") pod 
\"a7f39e73-ac13-401b-8b13-6b43964609cf\" (UID: \"a7f39e73-ac13-401b-8b13-6b43964609cf\") " Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421072 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421178 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87b0b35-2855-41c6-be2a-02ec21e4f76c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421291 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd59v\" (UniqueName: \"kubernetes.io/projected/a87b0b35-2855-41c6-be2a-02ec21e4f76c-kube-api-access-jd59v\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.421303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities" (OuterVolumeSpecName: "utilities") pod "a7f39e73-ac13-401b-8b13-6b43964609cf" (UID: "a7f39e73-ac13-401b-8b13-6b43964609cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.423362 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7" (OuterVolumeSpecName: "kube-api-access-mcrs7") pod "a7f39e73-ac13-401b-8b13-6b43964609cf" (UID: "a7f39e73-ac13-401b-8b13-6b43964609cf"). InnerVolumeSpecName "kube-api-access-mcrs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.449572 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7f39e73-ac13-401b-8b13-6b43964609cf" (UID: "a7f39e73-ac13-401b-8b13-6b43964609cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.522489 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrs7\" (UniqueName: \"kubernetes.io/projected/a7f39e73-ac13-401b-8b13-6b43964609cf-kube-api-access-mcrs7\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.522545 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.522566 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f39e73-ac13-401b-8b13-6b43964609cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.861051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7kl7k" event={"ID":"a87b0b35-2855-41c6-be2a-02ec21e4f76c","Type":"ContainerDied","Data":"635db501838d4e42233fed604e1328a175ce679ed6d00e8bd80c7b6b2b676d72"} Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.861086 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7kl7k" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.861122 4820 scope.go:117] "RemoveContainer" containerID="ecf07856d3b4f50ed0834670df00de7e386b44bafd0ea8f709b6d99d023a98ce" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.866630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l84vx" event={"ID":"a7f39e73-ac13-401b-8b13-6b43964609cf","Type":"ContainerDied","Data":"b1ed9a17504a7d60c3cf22c1fe22d6c87f221bc8dba2bac68fbbf21c748a7b8e"} Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.866851 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l84vx" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.895807 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.923307 4820 scope.go:117] "RemoveContainer" containerID="6e35621c7845230e7db05bdce58acb2fe25ff4ba7283b024c2f73621e9e64005" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.928431 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7kl7k"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.937381 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.942271 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l84vx"] Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.945355 4820 scope.go:117] "RemoveContainer" containerID="c39f50a678425e9cc0fcddc26b8691457a1645406c597144548b9a01c6ce923c" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.965464 4820 scope.go:117] "RemoveContainer" 
containerID="e74d4968e76f0baf85708fd8be7c66b5ddb9ed59306f224a9f0d79784c0b424d" Feb 21 07:40:31 crc kubenswrapper[4820]: I0221 07:40:31.983723 4820 scope.go:117] "RemoveContainer" containerID="5de55d91ad6a8e889320b2deff3ae550b0877b49c4dea85e11f0079996260448" Feb 21 07:40:32 crc kubenswrapper[4820]: I0221 07:40:32.003134 4820 scope.go:117] "RemoveContainer" containerID="097e6b83ef8daa21dbd26a23bbbff42fe5299e2430ed1d3e0afdfd1e974e37c8" Feb 21 07:40:33 crc kubenswrapper[4820]: I0221 07:40:33.710901 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" path="/var/lib/kubelet/pods/a7f39e73-ac13-401b-8b13-6b43964609cf/volumes" Feb 21 07:40:33 crc kubenswrapper[4820]: I0221 07:40:33.712631 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" path="/var/lib/kubelet/pods/a87b0b35-2855-41c6-be2a-02ec21e4f76c/volumes" Feb 21 07:40:35 crc kubenswrapper[4820]: I0221 07:40:35.701690 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:35 crc kubenswrapper[4820]: E0221 07:40:35.701948 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:40:49 crc kubenswrapper[4820]: I0221 07:40:49.696296 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:40:49 crc kubenswrapper[4820]: E0221 07:40:49.696991 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:00 crc kubenswrapper[4820]: I0221 07:41:00.696855 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:00 crc kubenswrapper[4820]: E0221 07:41:00.698940 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:13 crc kubenswrapper[4820]: I0221 07:41:13.696477 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:13 crc kubenswrapper[4820]: E0221 07:41:13.697513 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:17 crc kubenswrapper[4820]: I0221 07:41:17.277117 4820 scope.go:117] "RemoveContainer" containerID="2a132b162374d1bf952b4a2206ecdea043fde586e261063d25366c794555b053" Feb 21 07:41:17 crc kubenswrapper[4820]: I0221 07:41:17.314345 4820 scope.go:117] "RemoveContainer" containerID="def41a6eec93a17715a687e2008dba6a054262ab233fb3107ab1ad02fe7f9ea0" Feb 21 
07:41:17 crc kubenswrapper[4820]: I0221 07:41:17.353466 4820 scope.go:117] "RemoveContainer" containerID="9e6cbdcf98073623c42ebc08a3a9244293f57b950c05c7f4d4a46d72649d7bd4" Feb 21 07:41:24 crc kubenswrapper[4820]: I0221 07:41:24.697209 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:24 crc kubenswrapper[4820]: E0221 07:41:24.698086 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:36 crc kubenswrapper[4820]: I0221 07:41:36.696997 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:36 crc kubenswrapper[4820]: E0221 07:41:36.697756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:41:49 crc kubenswrapper[4820]: I0221 07:41:49.697415 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:41:49 crc kubenswrapper[4820]: E0221 07:41:49.698466 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:01 crc kubenswrapper[4820]: I0221 07:42:01.696766 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:01 crc kubenswrapper[4820]: E0221 07:42:01.697925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:14 crc kubenswrapper[4820]: I0221 07:42:14.697173 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:14 crc kubenswrapper[4820]: E0221 07:42:14.698189 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:29 crc kubenswrapper[4820]: I0221 07:42:29.697218 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:29 crc kubenswrapper[4820]: E0221 07:42:29.699365 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:42 crc kubenswrapper[4820]: I0221 07:42:42.697410 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:42 crc kubenswrapper[4820]: E0221 07:42:42.698428 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:42:54 crc kubenswrapper[4820]: I0221 07:42:54.697375 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:42:54 crc kubenswrapper[4820]: E0221 07:42:54.698538 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:07 crc kubenswrapper[4820]: I0221 07:43:07.697492 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:07 crc kubenswrapper[4820]: E0221 07:43:07.698292 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:22 crc kubenswrapper[4820]: I0221 07:43:22.719025 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:22 crc kubenswrapper[4820]: E0221 07:43:22.720356 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:33 crc kubenswrapper[4820]: I0221 07:43:33.696489 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:33 crc kubenswrapper[4820]: E0221 07:43:33.697709 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:45 crc kubenswrapper[4820]: I0221 07:43:45.705470 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:45 crc kubenswrapper[4820]: E0221 07:43:45.706623 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.259882 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260367 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260395 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260428 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260440 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-utilities" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260495 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260509 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260521 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="extract-content" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260547 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260558 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: E0221 07:43:47.260706 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.260722 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.261000 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f39e73-ac13-401b-8b13-6b43964609cf" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.261044 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87b0b35-2855-41c6-be2a-02ec21e4f76c" containerName="registry-server" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.263385 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.281073 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.433967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.434011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.434053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.535714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.535775 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.535833 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.536538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.536556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.577160 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"redhat-operators-6ggfn\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:47 crc kubenswrapper[4820]: I0221 07:43:47.595468 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.095598 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.457532 4820 generic.go:334] "Generic (PLEG): container finished" podID="59ff409a-d483-413d-8549-862ec2f9da1a" containerID="77b047163f233d4ace28a113a119d191feb8fb1886fc5f474ac7fe63f3a20b7f" exitCode=0 Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.457581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"77b047163f233d4ace28a113a119d191feb8fb1886fc5f474ac7fe63f3a20b7f"} Feb 21 07:43:48 crc kubenswrapper[4820]: I0221 07:43:48.457608 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerStarted","Data":"ed32db600b8711ac0a13c62d82b2a4ab2eac4d7bb3074bcdeca00cedd2562296"} Feb 21 07:43:50 crc kubenswrapper[4820]: I0221 07:43:50.473603 4820 generic.go:334] "Generic (PLEG): container finished" podID="59ff409a-d483-413d-8549-862ec2f9da1a" containerID="82b50520e8632e5dc0a42266b5504d8c61052d84efbaef88fb9a0953f51f4fc9" exitCode=0 Feb 21 07:43:50 crc kubenswrapper[4820]: I0221 07:43:50.473734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"82b50520e8632e5dc0a42266b5504d8c61052d84efbaef88fb9a0953f51f4fc9"} Feb 21 07:43:51 crc kubenswrapper[4820]: I0221 07:43:51.484481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" 
event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerStarted","Data":"6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf"} Feb 21 07:43:51 crc kubenswrapper[4820]: I0221 07:43:51.514097 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ggfn" podStartSLOduration=2.100494225 podStartE2EDuration="4.514073671s" podCreationTimestamp="2026-02-21 07:43:47 +0000 UTC" firstStartedPulling="2026-02-21 07:43:48.45904251 +0000 UTC m=+3403.492126708" lastFinishedPulling="2026-02-21 07:43:50.872621946 +0000 UTC m=+3405.905706154" observedRunningTime="2026-02-21 07:43:51.509338893 +0000 UTC m=+3406.542423111" watchObservedRunningTime="2026-02-21 07:43:51.514073671 +0000 UTC m=+3406.547157899" Feb 21 07:43:56 crc kubenswrapper[4820]: I0221 07:43:56.696211 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:43:56 crc kubenswrapper[4820]: E0221 07:43:56.696756 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:43:57 crc kubenswrapper[4820]: I0221 07:43:57.595621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:57 crc kubenswrapper[4820]: I0221 07:43:57.595980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:57 crc kubenswrapper[4820]: I0221 07:43:57.637119 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:58 crc kubenswrapper[4820]: I0221 07:43:58.577371 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:43:58 crc kubenswrapper[4820]: I0221 07:43:58.639699 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:44:00 crc kubenswrapper[4820]: I0221 07:44:00.552407 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ggfn" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" containerID="cri-o://6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf" gracePeriod=2 Feb 21 07:44:01 crc kubenswrapper[4820]: I0221 07:44:01.571192 4820 generic.go:334] "Generic (PLEG): container finished" podID="59ff409a-d483-413d-8549-862ec2f9da1a" containerID="6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf" exitCode=0 Feb 21 07:44:01 crc kubenswrapper[4820]: I0221 07:44:01.571308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf"} Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.172150 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.307587 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") pod \"59ff409a-d483-413d-8549-862ec2f9da1a\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.307783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") pod \"59ff409a-d483-413d-8549-862ec2f9da1a\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.307898 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") pod \"59ff409a-d483-413d-8549-862ec2f9da1a\" (UID: \"59ff409a-d483-413d-8549-862ec2f9da1a\") " Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.308567 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities" (OuterVolumeSpecName: "utilities") pod "59ff409a-d483-413d-8549-862ec2f9da1a" (UID: "59ff409a-d483-413d-8549-862ec2f9da1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.314522 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28" (OuterVolumeSpecName: "kube-api-access-fhd28") pod "59ff409a-d483-413d-8549-862ec2f9da1a" (UID: "59ff409a-d483-413d-8549-862ec2f9da1a"). InnerVolumeSpecName "kube-api-access-fhd28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.409575 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhd28\" (UniqueName: \"kubernetes.io/projected/59ff409a-d483-413d-8549-862ec2f9da1a-kube-api-access-fhd28\") on node \"crc\" DevicePath \"\"" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.409623 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.456453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59ff409a-d483-413d-8549-862ec2f9da1a" (UID: "59ff409a-d483-413d-8549-862ec2f9da1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.510765 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59ff409a-d483-413d-8549-862ec2f9da1a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.586532 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ggfn" event={"ID":"59ff409a-d483-413d-8549-862ec2f9da1a","Type":"ContainerDied","Data":"ed32db600b8711ac0a13c62d82b2a4ab2eac4d7bb3074bcdeca00cedd2562296"} Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.586604 4820 scope.go:117] "RemoveContainer" containerID="6eb19fdc65388d69c448ecb1303b36ef286f4072c66701c59cc2c0bc61fca3bf" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.586629 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ggfn" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.619315 4820 scope.go:117] "RemoveContainer" containerID="82b50520e8632e5dc0a42266b5504d8c61052d84efbaef88fb9a0953f51f4fc9" Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.638567 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.645769 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ggfn"] Feb 21 07:44:02 crc kubenswrapper[4820]: I0221 07:44:02.667883 4820 scope.go:117] "RemoveContainer" containerID="77b047163f233d4ace28a113a119d191feb8fb1886fc5f474ac7fe63f3a20b7f" Feb 21 07:44:03 crc kubenswrapper[4820]: I0221 07:44:03.706563 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" path="/var/lib/kubelet/pods/59ff409a-d483-413d-8549-862ec2f9da1a/volumes" Feb 21 07:44:11 crc kubenswrapper[4820]: I0221 07:44:11.696866 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:44:11 crc kubenswrapper[4820]: E0221 07:44:11.697607 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:44:22 crc kubenswrapper[4820]: I0221 07:44:22.697191 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973" Feb 21 07:44:23 crc kubenswrapper[4820]: I0221 07:44:23.765089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"} Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.168583 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 07:45:00 crc kubenswrapper[4820]: E0221 07:45:00.169550 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169569 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" Feb 21 07:45:00 crc kubenswrapper[4820]: E0221 07:45:00.169590 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-utilities" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169600 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-utilities" Feb 21 07:45:00 crc kubenswrapper[4820]: E0221 07:45:00.169613 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-content" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169624 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="extract-content" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.169844 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ff409a-d483-413d-8549-862ec2f9da1a" containerName="registry-server" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.170572 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.173638 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.173952 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.182168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.299722 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.299899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.299970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.401346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.401405 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.401489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.403966 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.412646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.420015 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"collect-profiles-29527665-bchpb\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.498328 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:00 crc kubenswrapper[4820]: I0221 07:45:00.764996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 07:45:00 crc kubenswrapper[4820]: W0221 07:45:00.773165 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc522f8d_0981_40c6_a17f_c5517c78a9cd.slice/crio-9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993 WatchSource:0}: Error finding container 9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993: Status 404 returned error can't find the container with id 9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993 Feb 21 07:45:01 crc kubenswrapper[4820]: I0221 07:45:01.093866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerStarted","Data":"5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc"} Feb 21 07:45:01 crc 
kubenswrapper[4820]: I0221 07:45:01.094072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerStarted","Data":"9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993"} Feb 21 07:45:01 crc kubenswrapper[4820]: I0221 07:45:01.109940 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" podStartSLOduration=1.109920052 podStartE2EDuration="1.109920052s" podCreationTimestamp="2026-02-21 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 07:45:01.109023879 +0000 UTC m=+3476.142108087" watchObservedRunningTime="2026-02-21 07:45:01.109920052 +0000 UTC m=+3476.143004250" Feb 21 07:45:02 crc kubenswrapper[4820]: I0221 07:45:02.104089 4820 generic.go:334] "Generic (PLEG): container finished" podID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerID="5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc" exitCode=0 Feb 21 07:45:02 crc kubenswrapper[4820]: I0221 07:45:02.104151 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerDied","Data":"5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc"} Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.488916 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.549441 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") pod \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.549555 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") pod \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.549734 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") pod \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\" (UID: \"bc522f8d-0981-40c6-a17f-c5517c78a9cd\") " Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.550308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc522f8d-0981-40c6-a17f-c5517c78a9cd" (UID: "bc522f8d-0981-40c6-a17f-c5517c78a9cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.554334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc522f8d-0981-40c6-a17f-c5517c78a9cd" (UID: "bc522f8d-0981-40c6-a17f-c5517c78a9cd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.556476 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8" (OuterVolumeSpecName: "kube-api-access-jv7b8") pod "bc522f8d-0981-40c6-a17f-c5517c78a9cd" (UID: "bc522f8d-0981-40c6-a17f-c5517c78a9cd"). InnerVolumeSpecName "kube-api-access-jv7b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.651458 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv7b8\" (UniqueName: \"kubernetes.io/projected/bc522f8d-0981-40c6-a17f-c5517c78a9cd-kube-api-access-jv7b8\") on node \"crc\" DevicePath \"\"" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.651486 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc522f8d-0981-40c6-a17f-c5517c78a9cd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:45:03 crc kubenswrapper[4820]: I0221 07:45:03.651496 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc522f8d-0981-40c6-a17f-c5517c78a9cd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.123403 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" event={"ID":"bc522f8d-0981-40c6-a17f-c5517c78a9cd","Type":"ContainerDied","Data":"9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993"} Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.123717 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9512df7df66a85a1af9a794299abd8d0407cba3e16f3b3a71fe90158d7aa6993" Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.123518 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb" Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.603454 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:45:04 crc kubenswrapper[4820]: I0221 07:45:04.613612 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527620-dh5dn"] Feb 21 07:45:05 crc kubenswrapper[4820]: I0221 07:45:05.707954 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54597218-e332-4423-adc0-b4be2977a4ce" path="/var/lib/kubelet/pods/54597218-e332-4423-adc0-b4be2977a4ce/volumes" Feb 21 07:45:17 crc kubenswrapper[4820]: I0221 07:45:17.508942 4820 scope.go:117] "RemoveContainer" containerID="5520f9baaf36da34f01d9939d3174e22d3ad84830852ce6d62998744f623b758" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.038462 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:45:50 crc kubenswrapper[4820]: E0221 07:45:50.039738 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerName="collect-profiles" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.039762 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerName="collect-profiles" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.040017 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" containerName="collect-profiles" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.041878 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.043124 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.105757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-catalog-content\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.105886 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-utilities\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.107407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxfh\" (UniqueName: \"kubernetes.io/projected/d5791a2a-f861-4564-b560-cef4e1d2b529-kube-api-access-rlxfh\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-catalog-content\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209176 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-utilities\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209311 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxfh\" (UniqueName: \"kubernetes.io/projected/d5791a2a-f861-4564-b560-cef4e1d2b529-kube-api-access-rlxfh\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-catalog-content\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.209824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5791a2a-f861-4564-b560-cef4e1d2b529-utilities\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.236900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxfh\" (UniqueName: \"kubernetes.io/projected/d5791a2a-f861-4564-b560-cef4e1d2b529-kube-api-access-rlxfh\") pod \"community-operators-h8lnd\" (UID: \"d5791a2a-f861-4564-b560-cef4e1d2b529\") " pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.392966 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:45:50 crc kubenswrapper[4820]: I0221 07:45:50.888856 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.539761 4820 generic.go:334] "Generic (PLEG): container finished" podID="d5791a2a-f861-4564-b560-cef4e1d2b529" containerID="ba2fd74f17ff184d3f71a915ff4a6b54ee3d5b98b067962e92939d708ce2f3cd" exitCode=0 Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.540152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerDied","Data":"ba2fd74f17ff184d3f71a915ff4a6b54ee3d5b98b067962e92939d708ce2f3cd"} Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.540191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerStarted","Data":"8e0f52b9f194ee392be24b10a7a304deaf167556f84f346a03c94b2df7969ae4"} Feb 21 07:45:51 crc kubenswrapper[4820]: I0221 07:45:51.542815 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:45:56 crc kubenswrapper[4820]: I0221 07:45:56.583818 4820 generic.go:334] "Generic (PLEG): container finished" podID="d5791a2a-f861-4564-b560-cef4e1d2b529" containerID="2d90eb386922e2185aefb6c54db664e6e352a006fb1ba2947fe80ebe5f7e2919" exitCode=0 Feb 21 07:45:56 crc kubenswrapper[4820]: I0221 07:45:56.583922 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerDied","Data":"2d90eb386922e2185aefb6c54db664e6e352a006fb1ba2947fe80ebe5f7e2919"} Feb 21 07:45:57 crc kubenswrapper[4820]: I0221 07:45:57.596926 4820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-h8lnd" event={"ID":"d5791a2a-f861-4564-b560-cef4e1d2b529","Type":"ContainerStarted","Data":"c6d993edb50049b43123e771d9d6a7c32cee5f3eaa89ea0576b281ca5b59a11a"} Feb 21 07:45:57 crc kubenswrapper[4820]: I0221 07:45:57.623819 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8lnd" podStartSLOduration=2.2239967099999998 podStartE2EDuration="7.623792726s" podCreationTimestamp="2026-02-21 07:45:50 +0000 UTC" firstStartedPulling="2026-02-21 07:45:51.542460469 +0000 UTC m=+3526.575544707" lastFinishedPulling="2026-02-21 07:45:56.942256525 +0000 UTC m=+3531.975340723" observedRunningTime="2026-02-21 07:45:57.618899474 +0000 UTC m=+3532.651983682" watchObservedRunningTime="2026-02-21 07:45:57.623792726 +0000 UTC m=+3532.656876934" Feb 21 07:46:00 crc kubenswrapper[4820]: I0221 07:46:00.393451 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:00 crc kubenswrapper[4820]: I0221 07:46:00.394163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:00 crc kubenswrapper[4820]: I0221 07:46:00.458716 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.472912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8lnd" Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.561406 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8lnd"] Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.604402 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 07:46:10 crc 
kubenswrapper[4820]: I0221 07:46:10.604702 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5rj56" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server" containerID="cri-o://562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" gracePeriod=2 Feb 21 07:46:10 crc kubenswrapper[4820]: I0221 07:46:10.992266 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.042685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") pod \"a72aad09-5c42-41f0-9699-9160d1750191\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.042848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") pod \"a72aad09-5c42-41f0-9699-9160d1750191\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.042874 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") pod \"a72aad09-5c42-41f0-9699-9160d1750191\" (UID: \"a72aad09-5c42-41f0-9699-9160d1750191\") " Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.043415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities" (OuterVolumeSpecName: "utilities") pod "a72aad09-5c42-41f0-9699-9160d1750191" (UID: "a72aad09-5c42-41f0-9699-9160d1750191"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.061471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7" (OuterVolumeSpecName: "kube-api-access-fz5g7") pod "a72aad09-5c42-41f0-9699-9160d1750191" (UID: "a72aad09-5c42-41f0-9699-9160d1750191"). InnerVolumeSpecName "kube-api-access-fz5g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.097071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a72aad09-5c42-41f0-9699-9160d1750191" (UID: "a72aad09-5c42-41f0-9699-9160d1750191"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.144360 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.144396 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz5g7\" (UniqueName: \"kubernetes.io/projected/a72aad09-5c42-41f0-9699-9160d1750191-kube-api-access-fz5g7\") on node \"crc\" DevicePath \"\"" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.144405 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72aad09-5c42-41f0-9699-9160d1750191-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.703532 4820 generic.go:334] "Generic (PLEG): container finished" podID="a72aad09-5c42-41f0-9699-9160d1750191" 
containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" exitCode=0 Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.703706 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rj56" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.710688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e"} Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.710733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rj56" event={"ID":"a72aad09-5c42-41f0-9699-9160d1750191","Type":"ContainerDied","Data":"a241b80262b56f5d048ff4666a6e3d23fdf812bb1aab7c42d8d4b602a3f884d7"} Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.710762 4820 scope.go:117] "RemoveContainer" containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.748522 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.752857 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5rj56"] Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.754015 4820 scope.go:117] "RemoveContainer" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.771216 4820 scope.go:117] "RemoveContainer" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.790279 4820 scope.go:117] "RemoveContainer" containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" Feb 21 
07:46:11 crc kubenswrapper[4820]: E0221 07:46:11.790954 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e\": container with ID starting with 562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e not found: ID does not exist" containerID="562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791027 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e"} err="failed to get container status \"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e\": rpc error: code = NotFound desc = could not find container \"562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e\": container with ID starting with 562150274b83fa2ba485e813d2cced3731c6af828eeb66dce6c76e328aafd31e not found: ID does not exist" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791072 4820 scope.go:117] "RemoveContainer" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" Feb 21 07:46:11 crc kubenswrapper[4820]: E0221 07:46:11.791455 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766\": container with ID starting with d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766 not found: ID does not exist" containerID="d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791505 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766"} err="failed to get container status 
\"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766\": rpc error: code = NotFound desc = could not find container \"d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766\": container with ID starting with d6491e3567b08a6c8a13e1eb97dba97862e39d41734ca415bbfe0652baee7766 not found: ID does not exist" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791527 4820 scope.go:117] "RemoveContainer" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" Feb 21 07:46:11 crc kubenswrapper[4820]: E0221 07:46:11.791941 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6\": container with ID starting with 0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6 not found: ID does not exist" containerID="0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6" Feb 21 07:46:11 crc kubenswrapper[4820]: I0221 07:46:11.791979 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6"} err="failed to get container status \"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6\": rpc error: code = NotFound desc = could not find container \"0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6\": container with ID starting with 0f1356ac4ee8cad420423de4412cc29dd368ab7dbc75d97c4e02224b8c17fac6 not found: ID does not exist" Feb 21 07:46:13 crc kubenswrapper[4820]: I0221 07:46:13.710872 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72aad09-5c42-41f0-9699-9160d1750191" path="/var/lib/kubelet/pods/a72aad09-5c42-41f0-9699-9160d1750191/volumes" Feb 21 07:46:43 crc kubenswrapper[4820]: I0221 07:46:43.816724 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:46:43 crc kubenswrapper[4820]: I0221 07:46:43.817221 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:47:13 crc kubenswrapper[4820]: I0221 07:47:13.816148 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:47:13 crc kubenswrapper[4820]: I0221 07:47:13.816757 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.817010 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.819652 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.819749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.820686 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:47:43 crc kubenswrapper[4820]: I0221 07:47:43.820800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5" gracePeriod=600 Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455145 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5" exitCode=0 Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"} Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"}
Feb 21 07:47:44 crc kubenswrapper[4820]: I0221 07:47:44.455816 4820 scope.go:117] "RemoveContainer" containerID="f285317478c5033d1adfdff9c08e01ee76536e4b392b0a1765d02ea5287e6973"
Feb 21 07:50:13 crc kubenswrapper[4820]: I0221 07:50:13.816561 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:50:13 crc kubenswrapper[4820]: I0221 07:50:13.817162 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:50:43 crc kubenswrapper[4820]: I0221 07:50:43.817036 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:50:43 crc kubenswrapper[4820]: I0221 07:50:43.817742 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.085073 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nvld4"]
Feb 21 07:51:06 crc kubenswrapper[4820]: E0221 07:51:06.087399 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-content"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087437 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-content"
Feb 21 07:51:06 crc kubenswrapper[4820]: E0221 07:51:06.087502 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-utilities"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087520 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="extract-utilities"
Feb 21 07:51:06 crc kubenswrapper[4820]: E0221 07:51:06.087560 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087576 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.087913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72aad09-5c42-41f0-9699-9160d1750191" containerName="registry-server"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.090618 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.093675 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"]
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.108634 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.108745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.109024 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.214886 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.214980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.215027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.215554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.216194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.256618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"certified-operators-nvld4\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") " pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.429863 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:06 crc kubenswrapper[4820]: I0221 07:51:06.953340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"]
Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.691006 4820 generic.go:334] "Generic (PLEG): container finished" podID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerID="ccfedcae81749135f20d9ed7246561c1f38a7a2e8ac32cd0806884306ac4ea4e" exitCode=0
Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.691063 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"ccfedcae81749135f20d9ed7246561c1f38a7a2e8ac32cd0806884306ac4ea4e"}
Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.691144 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerStarted","Data":"5aeca919d8786948187d2de5319582612e4556d145b9674f4edcf179da89ecc6"}
Feb 21 07:51:07 crc kubenswrapper[4820]: I0221 07:51:07.694717 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 07:51:08 crc kubenswrapper[4820]: I0221 07:51:08.700934 4820 generic.go:334] "Generic (PLEG): container finished" podID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerID="3ef335f2a7ef2b50ea5743bd72dc7e2b76f53ee6d270222f4e60aaf4f0dcd3f8" exitCode=0
Feb 21 07:51:08 crc kubenswrapper[4820]: I0221 07:51:08.701148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"3ef335f2a7ef2b50ea5743bd72dc7e2b76f53ee6d270222f4e60aaf4f0dcd3f8"}
Feb 21 07:51:09 crc kubenswrapper[4820]: I0221 07:51:09.711190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerStarted","Data":"03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612"}
Feb 21 07:51:09 crc kubenswrapper[4820]: I0221 07:51:09.735706 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nvld4" podStartSLOduration=2.285887618 podStartE2EDuration="3.735685086s" podCreationTimestamp="2026-02-21 07:51:06 +0000 UTC" firstStartedPulling="2026-02-21 07:51:07.694422904 +0000 UTC m=+3842.727507112" lastFinishedPulling="2026-02-21 07:51:09.144220372 +0000 UTC m=+3844.177304580" observedRunningTime="2026-02-21 07:51:09.729513169 +0000 UTC m=+3844.762597367" watchObservedRunningTime="2026-02-21 07:51:09.735685086 +0000 UTC m=+3844.768769284"
Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.816347 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.816963 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.817020 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.817829 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 07:51:13 crc kubenswrapper[4820]: I0221 07:51:13.817924 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" gracePeriod=600
Feb 21 07:51:14 crc kubenswrapper[4820]: E0221 07:51:14.455919 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.746676 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" exitCode=0
Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.746715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"}
Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.746745 4820 scope.go:117] "RemoveContainer" containerID="ac5595d8b4c98934f854ed3d9927562ebe603dd94584fa7e1c32a88718f68ed5"
Feb 21 07:51:14 crc kubenswrapper[4820]: I0221 07:51:14.747170 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:51:14 crc kubenswrapper[4820]: E0221 07:51:14.747402 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.429949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.430814 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.479691 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.821647 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:16 crc kubenswrapper[4820]: I0221 07:51:16.872417 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"]
Feb 21 07:51:18 crc kubenswrapper[4820]: I0221 07:51:18.776742 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nvld4" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server" containerID="cri-o://03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612" gracePeriod=2
Feb 21 07:51:19 crc kubenswrapper[4820]: I0221 07:51:19.798090 4820 generic.go:334] "Generic (PLEG): container finished" podID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerID="03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612" exitCode=0
Feb 21 07:51:19 crc kubenswrapper[4820]: I0221 07:51:19.798142 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612"}
Feb 21 07:51:19 crc kubenswrapper[4820]: I0221 07:51:19.877612 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.040905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") pod \"37061a92-ef34-4c34-a0c0-acb8ca735d72\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") "
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.041100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") pod \"37061a92-ef34-4c34-a0c0-acb8ca735d72\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") "
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.041141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") pod \"37061a92-ef34-4c34-a0c0-acb8ca735d72\" (UID: \"37061a92-ef34-4c34-a0c0-acb8ca735d72\") "
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.042917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities" (OuterVolumeSpecName: "utilities") pod "37061a92-ef34-4c34-a0c0-acb8ca735d72" (UID: "37061a92-ef34-4c34-a0c0-acb8ca735d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.047220 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94" (OuterVolumeSpecName: "kube-api-access-6ck94") pod "37061a92-ef34-4c34-a0c0-acb8ca735d72" (UID: "37061a92-ef34-4c34-a0c0-acb8ca735d72"). InnerVolumeSpecName "kube-api-access-6ck94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.109978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37061a92-ef34-4c34-a0c0-acb8ca735d72" (UID: "37061a92-ef34-4c34-a0c0-acb8ca735d72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.142899 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.142939 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061a92-ef34-4c34-a0c0-acb8ca735d72-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.142957 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ck94\" (UniqueName: \"kubernetes.io/projected/37061a92-ef34-4c34-a0c0-acb8ca735d72-kube-api-access-6ck94\") on node \"crc\" DevicePath \"\""
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.815690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvld4" event={"ID":"37061a92-ef34-4c34-a0c0-acb8ca735d72","Type":"ContainerDied","Data":"5aeca919d8786948187d2de5319582612e4556d145b9674f4edcf179da89ecc6"}
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.815777 4820 scope.go:117] "RemoveContainer" containerID="03268773776ed7972cff628196c80e7463cb141344b4d385ae506dfa501bd612"
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.815878 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvld4"
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.833946 4820 scope.go:117] "RemoveContainer" containerID="3ef335f2a7ef2b50ea5743bd72dc7e2b76f53ee6d270222f4e60aaf4f0dcd3f8"
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.871131 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"]
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.872582 4820 scope.go:117] "RemoveContainer" containerID="ccfedcae81749135f20d9ed7246561c1f38a7a2e8ac32cd0806884306ac4ea4e"
Feb 21 07:51:20 crc kubenswrapper[4820]: I0221 07:51:20.881511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nvld4"]
Feb 21 07:51:21 crc kubenswrapper[4820]: I0221 07:51:21.713129 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" path="/var/lib/kubelet/pods/37061a92-ef34-4c34-a0c0-acb8ca735d72/volumes"
Feb 21 07:51:25 crc kubenswrapper[4820]: I0221 07:51:25.718175 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:51:25 crc kubenswrapper[4820]: E0221 07:51:25.721089 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:51:37 crc kubenswrapper[4820]: I0221 07:51:37.698586 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:51:37 crc kubenswrapper[4820]: E0221 07:51:37.699526 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:51:50 crc kubenswrapper[4820]: I0221 07:51:50.697619 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:51:50 crc kubenswrapper[4820]: E0221 07:51:50.698485 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:52:02 crc kubenswrapper[4820]: I0221 07:52:02.697591 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:52:02 crc kubenswrapper[4820]: E0221 07:52:02.699692 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:52:16 crc kubenswrapper[4820]: I0221 07:52:16.696815 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:52:16 crc kubenswrapper[4820]: E0221 07:52:16.697889 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:52:28 crc kubenswrapper[4820]: I0221 07:52:28.696428 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:52:28 crc kubenswrapper[4820]: E0221 07:52:28.697122 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:52:39 crc kubenswrapper[4820]: I0221 07:52:39.697127 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:52:39 crc kubenswrapper[4820]: E0221 07:52:39.697958 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:52:51 crc kubenswrapper[4820]: I0221 07:52:51.697459 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:52:51 crc kubenswrapper[4820]: E0221 07:52:51.698538 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:53:04 crc kubenswrapper[4820]: I0221 07:53:04.697130 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:53:04 crc kubenswrapper[4820]: E0221 07:53:04.698220 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:53:15 crc kubenswrapper[4820]: I0221 07:53:15.709001 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:53:15 crc kubenswrapper[4820]: E0221 07:53:15.710895 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:53:29 crc kubenswrapper[4820]: I0221 07:53:29.698215 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:53:29 crc kubenswrapper[4820]: E0221 07:53:29.698932 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:53:41 crc kubenswrapper[4820]: I0221 07:53:41.697225 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:53:41 crc kubenswrapper[4820]: E0221 07:53:41.702452 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:53:54 crc kubenswrapper[4820]: I0221 07:53:54.697117 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:53:54 crc kubenswrapper[4820]: E0221 07:53:54.697988 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:54:08 crc kubenswrapper[4820]: I0221 07:54:08.696708 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:54:08 crc kubenswrapper[4820]: E0221 07:54:08.697365 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.863558 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"]
Feb 21 07:54:15 crc kubenswrapper[4820]: E0221 07:54:15.864119 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864134 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server"
Feb 21 07:54:15 crc kubenswrapper[4820]: E0221 07:54:15.864145 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-content"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-content"
Feb 21 07:54:15 crc kubenswrapper[4820]: E0221 07:54:15.864161 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-utilities"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="extract-utilities"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.864313 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="37061a92-ef34-4c34-a0c0-acb8ca735d72" containerName="registry-server"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.865205 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:15 crc kubenswrapper[4820]: I0221 07:54:15.909173 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"]
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.044196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.044422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.044498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145480 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145605 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.145985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.146006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.164163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"redhat-operators-5qh6v\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.210840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:16 crc kubenswrapper[4820]: I0221 07:54:16.630714 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"]
Feb 21 07:54:17 crc kubenswrapper[4820]: I0221 07:54:17.271559 4820 generic.go:334] "Generic (PLEG): container finished" podID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerID="551c7020b75f78fafdc92fa0996fa48ac32fef5cdae5fc05dac1a2ef77fc144d" exitCode=0
Feb 21 07:54:17 crc kubenswrapper[4820]: I0221 07:54:17.271605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"551c7020b75f78fafdc92fa0996fa48ac32fef5cdae5fc05dac1a2ef77fc144d"}
Feb 21 07:54:17 crc kubenswrapper[4820]: I0221 07:54:17.271842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerStarted","Data":"e47be5b63a22e1b81079b06c8168105121b628bdf4c0ea2144497c900368850f"}
Feb 21 07:54:18 crc kubenswrapper[4820]: I0221 07:54:18.286038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerStarted","Data":"9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8"}
Feb 21 07:54:19 crc kubenswrapper[4820]: I0221 07:54:19.293270 4820 generic.go:334] "Generic (PLEG): container finished" podID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerID="9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8" exitCode=0
Feb 21 07:54:19 crc kubenswrapper[4820]: I0221 07:54:19.293314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8"}
Feb 21 07:54:19 crc kubenswrapper[4820]: I0221 07:54:19.697312 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8"
Feb 21 07:54:19 crc kubenswrapper[4820]: E0221 07:54:19.697634 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 07:54:20 crc kubenswrapper[4820]: I0221 07:54:20.304220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerStarted","Data":"0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f"}
Feb 21 07:54:26 crc kubenswrapper[4820]: I0221 07:54:26.211439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:26 crc kubenswrapper[4820]: I0221 07:54:26.212019 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qh6v"
Feb 21 07:54:27 crc kubenswrapper[4820]: I0221 07:54:27.259161 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qh6v" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" probeResult="failure" output=<
Feb 21 07:54:27 crc kubenswrapper[4820]: 	timeout: failed to connect service ":50051" within 1s
Feb 21 07:54:27 crc kubenswrapper[4820]: >
Feb 21
07:54:32 crc kubenswrapper[4820]: I0221 07:54:32.697522 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:32 crc kubenswrapper[4820]: E0221 07:54:32.698565 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:36 crc kubenswrapper[4820]: I0221 07:54:36.283866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:36 crc kubenswrapper[4820]: I0221 07:54:36.305531 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qh6v" podStartSLOduration=18.737537006 podStartE2EDuration="21.305516812s" podCreationTimestamp="2026-02-21 07:54:15 +0000 UTC" firstStartedPulling="2026-02-21 07:54:17.273389312 +0000 UTC m=+4032.306473500" lastFinishedPulling="2026-02-21 07:54:19.841369088 +0000 UTC m=+4034.874453306" observedRunningTime="2026-02-21 07:54:20.332533269 +0000 UTC m=+4035.365617527" watchObservedRunningTime="2026-02-21 07:54:36.305516812 +0000 UTC m=+4051.338601010" Feb 21 07:54:36 crc kubenswrapper[4820]: I0221 07:54:36.335133 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.053080 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.062796 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.088203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.189854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.189931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.190042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.291507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.291563 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.291593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.292429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.292445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.312254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"redhat-marketplace-fvqzv\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.397621 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:41 crc kubenswrapper[4820]: I0221 07:54:41.834830 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:42 crc kubenswrapper[4820]: I0221 07:54:42.457616 4820 generic.go:334] "Generic (PLEG): container finished" podID="89ea0d90-b48d-4619-930e-e2e56101d066" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" exitCode=0 Feb 21 07:54:42 crc kubenswrapper[4820]: I0221 07:54:42.457722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7"} Feb 21 07:54:42 crc kubenswrapper[4820]: I0221 07:54:42.457967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerStarted","Data":"cab574a64fae6e111005a38e735d31a4069face554eedb6727fd214b99556119"} Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.467903 4820 generic.go:334] "Generic (PLEG): container finished" podID="89ea0d90-b48d-4619-930e-e2e56101d066" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" exitCode=0 Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.468098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66"} Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.841745 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:43 crc kubenswrapper[4820]: I0221 07:54:43.842014 4820 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qh6v" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" containerID="cri-o://0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f" gracePeriod=2 Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.483059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerStarted","Data":"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a"} Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.487652 4820 generic.go:334] "Generic (PLEG): container finished" podID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerID="0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f" exitCode=0 Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.487700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f"} Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.653935 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.688946 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvqzv" podStartSLOduration=2.290806141 podStartE2EDuration="3.688915258s" podCreationTimestamp="2026-02-21 07:54:41 +0000 UTC" firstStartedPulling="2026-02-21 07:54:42.459468149 +0000 UTC m=+4057.492552357" lastFinishedPulling="2026-02-21 07:54:43.857577236 +0000 UTC m=+4058.890661474" observedRunningTime="2026-02-21 07:54:44.518612257 +0000 UTC m=+4059.551696465" watchObservedRunningTime="2026-02-21 07:54:44.688915258 +0000 UTC m=+4059.721999496" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.742975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") pod \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.743069 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") pod \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.743184 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") pod \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\" (UID: \"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0\") " Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.744122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities" 
(OuterVolumeSpecName: "utilities") pod "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" (UID: "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.749203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn" (OuterVolumeSpecName: "kube-api-access-9srpn") pod "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" (UID: "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0"). InnerVolumeSpecName "kube-api-access-9srpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.844468 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.844502 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9srpn\" (UniqueName: \"kubernetes.io/projected/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-kube-api-access-9srpn\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.902585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" (UID: "406e270d-46a1-46ad-9cdf-2c4edc1c2eb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:44 crc kubenswrapper[4820]: I0221 07:54:44.945439 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.501589 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qh6v" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.501591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qh6v" event={"ID":"406e270d-46a1-46ad-9cdf-2c4edc1c2eb0","Type":"ContainerDied","Data":"e47be5b63a22e1b81079b06c8168105121b628bdf4c0ea2144497c900368850f"} Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.502129 4820 scope.go:117] "RemoveContainer" containerID="0942a51db64d709d78dadc64819bf491a188d02b79dd213a868b382714d64d0f" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.530981 4820 scope.go:117] "RemoveContainer" containerID="9ea07a6df45e5af9d80dc98dc60ca5d5e8bf8a06521c6ff33d9f07118d3e86b8" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.557889 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.587414 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qh6v"] Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.596664 4820 scope.go:117] "RemoveContainer" containerID="551c7020b75f78fafdc92fa0996fa48ac32fef5cdae5fc05dac1a2ef77fc144d" Feb 21 07:54:45 crc kubenswrapper[4820]: I0221 07:54:45.707784 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" path="/var/lib/kubelet/pods/406e270d-46a1-46ad-9cdf-2c4edc1c2eb0/volumes" Feb 21 07:54:47 crc 
kubenswrapper[4820]: I0221 07:54:47.696872 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:47 crc kubenswrapper[4820]: E0221 07:54:47.697195 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.398352 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.398799 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.477204 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:51 crc kubenswrapper[4820]: I0221 07:54:51.595438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:52 crc kubenswrapper[4820]: I0221 07:54:52.043355 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.568604 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fvqzv" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" containerID="cri-o://728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" gracePeriod=2 Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 
07:54:53.930264 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.975291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") pod \"89ea0d90-b48d-4619-930e-e2e56101d066\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.975466 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") pod \"89ea0d90-b48d-4619-930e-e2e56101d066\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.976331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") pod \"89ea0d90-b48d-4619-930e-e2e56101d066\" (UID: \"89ea0d90-b48d-4619-930e-e2e56101d066\") " Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.976411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities" (OuterVolumeSpecName: "utilities") pod "89ea0d90-b48d-4619-930e-e2e56101d066" (UID: "89ea0d90-b48d-4619-930e-e2e56101d066"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.976938 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.981038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx" (OuterVolumeSpecName: "kube-api-access-7zlzx") pod "89ea0d90-b48d-4619-930e-e2e56101d066" (UID: "89ea0d90-b48d-4619-930e-e2e56101d066"). InnerVolumeSpecName "kube-api-access-7zlzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:54:53 crc kubenswrapper[4820]: I0221 07:54:53.998545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89ea0d90-b48d-4619-930e-e2e56101d066" (UID: "89ea0d90-b48d-4619-930e-e2e56101d066"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.078620 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zlzx\" (UniqueName: \"kubernetes.io/projected/89ea0d90-b48d-4619-930e-e2e56101d066-kube-api-access-7zlzx\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.078670 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ea0d90-b48d-4619-930e-e2e56101d066-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576608 4820 generic.go:334] "Generic (PLEG): container finished" podID="89ea0d90-b48d-4619-930e-e2e56101d066" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" exitCode=0 Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576639 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a"} Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576684 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvqzv" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576706 4820 scope.go:117] "RemoveContainer" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.576690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvqzv" event={"ID":"89ea0d90-b48d-4619-930e-e2e56101d066","Type":"ContainerDied","Data":"cab574a64fae6e111005a38e735d31a4069face554eedb6727fd214b99556119"} Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.609781 4820 scope.go:117] "RemoveContainer" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.613523 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.618574 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvqzv"] Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.644857 4820 scope.go:117] "RemoveContainer" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.674883 4820 scope.go:117] "RemoveContainer" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" Feb 21 07:54:54 crc kubenswrapper[4820]: E0221 07:54:54.675552 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a\": container with ID starting with 728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a not found: ID does not exist" containerID="728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.675601 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a"} err="failed to get container status \"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a\": rpc error: code = NotFound desc = could not find container \"728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a\": container with ID starting with 728906b9f7bcf2cbb545ea6130064eec6bef6bdc431afbffd3b3c3cb0a14984a not found: ID does not exist" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.675638 4820 scope.go:117] "RemoveContainer" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" Feb 21 07:54:54 crc kubenswrapper[4820]: E0221 07:54:54.675977 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66\": container with ID starting with 57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66 not found: ID does not exist" containerID="57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.676020 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66"} err="failed to get container status \"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66\": rpc error: code = NotFound desc = could not find container \"57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66\": container with ID starting with 57db715441c66e2cec0bb1dc7348d1eff3dcd6ac96599a09cd9729ad89e73e66 not found: ID does not exist" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.676049 4820 scope.go:117] "RemoveContainer" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" Feb 21 07:54:54 crc kubenswrapper[4820]: E0221 
07:54:54.676414 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7\": container with ID starting with c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7 not found: ID does not exist" containerID="c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7" Feb 21 07:54:54 crc kubenswrapper[4820]: I0221 07:54:54.676572 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7"} err="failed to get container status \"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7\": rpc error: code = NotFound desc = could not find container \"c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7\": container with ID starting with c2cdeec8d7a7c951fd9e88d09342555ff00bd17d967cb20b82f49be06045f4a7 not found: ID does not exist" Feb 21 07:54:55 crc kubenswrapper[4820]: I0221 07:54:55.714801 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" path="/var/lib/kubelet/pods/89ea0d90-b48d-4619-930e-e2e56101d066/volumes" Feb 21 07:54:58 crc kubenswrapper[4820]: I0221 07:54:58.696829 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:54:58 crc kubenswrapper[4820]: E0221 07:54:58.697125 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:11 crc kubenswrapper[4820]: I0221 07:55:11.697958 
4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:11 crc kubenswrapper[4820]: E0221 07:55:11.700065 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:22 crc kubenswrapper[4820]: I0221 07:55:22.697421 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:22 crc kubenswrapper[4820]: E0221 07:55:22.698036 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:36 crc kubenswrapper[4820]: I0221 07:55:36.697311 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:36 crc kubenswrapper[4820]: E0221 07:55:36.699100 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:55:51 crc kubenswrapper[4820]: I0221 
07:55:51.696866 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:55:51 crc kubenswrapper[4820]: E0221 07:55:51.698079 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:56:06 crc kubenswrapper[4820]: I0221 07:56:06.696751 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:56:06 crc kubenswrapper[4820]: E0221 07:56:06.697572 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.865411 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867033 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867132 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867198 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867278 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867343 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867404 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867530 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867592 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867681 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="extract-utilities" Feb 21 07:56:15 crc kubenswrapper[4820]: E0221 07:56:15.867765 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.867836 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="extract-content" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.868075 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="406e270d-46a1-46ad-9cdf-2c4edc1c2eb0" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.868175 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ea0d90-b48d-4619-930e-e2e56101d066" containerName="registry-server" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.869383 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.885902 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.992498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.992721 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:15 crc kubenswrapper[4820]: I0221 07:56:15.992805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.094529 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.094582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.094625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.095181 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.095183 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.122723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"community-operators-9b2lv\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.206549 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:16 crc kubenswrapper[4820]: I0221 07:56:16.702592 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.224069 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" exitCode=0 Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.224111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066"} Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.224140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerStarted","Data":"dc86ced4a4e068c48caf8132e6efbb20cfa16413fa778776951f2112849a0a68"} Feb 21 07:56:17 crc kubenswrapper[4820]: I0221 07:56:17.226216 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 07:56:18 crc kubenswrapper[4820]: I0221 07:56:18.697619 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 07:56:19 crc kubenswrapper[4820]: I0221 07:56:19.240136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"} Feb 21 07:56:19 crc kubenswrapper[4820]: I0221 07:56:19.241841 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" exitCode=0 Feb 21 07:56:19 crc kubenswrapper[4820]: I0221 07:56:19.241886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e"} Feb 21 07:56:20 crc kubenswrapper[4820]: I0221 07:56:20.257832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerStarted","Data":"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb"} Feb 21 07:56:20 crc kubenswrapper[4820]: I0221 07:56:20.287991 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9b2lv" podStartSLOduration=2.871190908 podStartE2EDuration="5.287966998s" podCreationTimestamp="2026-02-21 07:56:15 +0000 UTC" firstStartedPulling="2026-02-21 07:56:17.225879302 +0000 UTC m=+4152.258963510" lastFinishedPulling="2026-02-21 07:56:19.642655402 +0000 UTC m=+4154.675739600" observedRunningTime="2026-02-21 07:56:20.280571968 +0000 UTC m=+4155.313656196" watchObservedRunningTime="2026-02-21 07:56:20.287966998 +0000 UTC m=+4155.321051216" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.207553 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 
07:56:26.207868 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.250979 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.354310 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:26 crc kubenswrapper[4820]: I0221 07:56:26.651262 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.319807 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9b2lv" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" containerID="cri-o://a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" gracePeriod=2 Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.773813 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.974990 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") pod \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.975087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") pod \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.975156 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") pod \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\" (UID: \"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1\") " Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.976893 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities" (OuterVolumeSpecName: "utilities") pod "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" (UID: "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:56:28 crc kubenswrapper[4820]: I0221 07:56:28.980693 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw" (OuterVolumeSpecName: "kube-api-access-zdzrw") pod "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" (UID: "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1"). InnerVolumeSpecName "kube-api-access-zdzrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.076330 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.077010 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdzrw\" (UniqueName: \"kubernetes.io/projected/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-kube-api-access-zdzrw\") on node \"crc\" DevicePath \"\"" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330532 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" exitCode=0 Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330583 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb"} Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330613 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b2lv" event={"ID":"9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1","Type":"ContainerDied","Data":"dc86ced4a4e068c48caf8132e6efbb20cfa16413fa778776951f2112849a0a68"} Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330633 4820 scope.go:117] "RemoveContainer" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.330780 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9b2lv" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.351198 4820 scope.go:117] "RemoveContainer" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.370191 4820 scope.go:117] "RemoveContainer" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.398830 4820 scope.go:117] "RemoveContainer" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" Feb 21 07:56:29 crc kubenswrapper[4820]: E0221 07:56:29.399220 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb\": container with ID starting with a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb not found: ID does not exist" containerID="a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399281 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb"} err="failed to get container status \"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb\": rpc error: code = NotFound desc = could not find container \"a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb\": container with ID starting with a83df1f8ef0294f7541c1d8f75a566748e9800e475269e07bf6011ae3a284dcb not found: ID does not exist" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399305 4820 scope.go:117] "RemoveContainer" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" Feb 21 07:56:29 crc kubenswrapper[4820]: E0221 07:56:29.399799 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e\": container with ID starting with 7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e not found: ID does not exist" containerID="7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399826 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e"} err="failed to get container status \"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e\": rpc error: code = NotFound desc = could not find container \"7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e\": container with ID starting with 7397563a5daf8920f6e20f451332a8a09ea22e444b7ca371a7876df81010e44e not found: ID does not exist" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.399844 4820 scope.go:117] "RemoveContainer" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" Feb 21 07:56:29 crc kubenswrapper[4820]: E0221 07:56:29.400299 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066\": container with ID starting with ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066 not found: ID does not exist" containerID="ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.400340 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066"} err="failed to get container status \"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066\": rpc error: code = NotFound desc = could not find container 
\"ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066\": container with ID starting with ef32ad11ee76afd376dcfd4eb1bfa53a4167b56472eb1272d48e31ddc7538066 not found: ID does not exist" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.623289 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" (UID: "9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.673536 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.681228 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9b2lv"] Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.685832 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 07:56:29 crc kubenswrapper[4820]: I0221 07:56:29.705685 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" path="/var/lib/kubelet/pods/9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1/volumes" Feb 21 07:58:43 crc kubenswrapper[4820]: I0221 07:58:43.815682 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:58:43 crc kubenswrapper[4820]: I0221 07:58:43.816181 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:59:13 crc kubenswrapper[4820]: I0221 07:59:13.816406 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:59:13 crc kubenswrapper[4820]: I0221 07:59:13.817157 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.816212 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.816898 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.816975 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 07:59:43 crc 
kubenswrapper[4820]: I0221 07:59:43.818265 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 07:59:43 crc kubenswrapper[4820]: I0221 07:59:43.818502 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b" gracePeriod=600 Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875206 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b" exitCode=0 Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"} Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"} Feb 21 07:59:44 crc kubenswrapper[4820]: I0221 07:59:44.875758 4820 scope.go:117] "RemoveContainer" containerID="a1c1bc4db54523e5c0edd596e595956910b6b45be82070c004697f48d61cd6e8" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.192330 4820 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:00:00 crc kubenswrapper[4820]: E0221 08:00:00.193187 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-utilities" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193203 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-utilities" Feb 21 08:00:00 crc kubenswrapper[4820]: E0221 08:00:00.193219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-content" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="extract-content" Feb 21 08:00:00 crc kubenswrapper[4820]: E0221 08:00:00.193272 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193281 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.193408 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd6311a-7b4f-4bf4-808b-c949cdbbcdc1" containerName="registry-server" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.194021 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.198041 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.201433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.204294 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.306897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.307036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.307941 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.409507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.409937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.410003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.411032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.418049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.446417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"collect-profiles-29527680-vvkw5\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.541221 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.961118 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:00:00 crc kubenswrapper[4820]: W0221 08:00:00.964415 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod053b4929_8cfe_48ef_b6ab_d57fa3eeebc1.slice/crio-fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526 WatchSource:0}: Error finding container fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526: Status 404 returned error can't find the container with id fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526 Feb 21 08:00:00 crc kubenswrapper[4820]: I0221 08:00:00.993283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" event={"ID":"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1","Type":"ContainerStarted","Data":"fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526"} Feb 21 08:00:02 crc 
kubenswrapper[4820]: I0221 08:00:02.000433 4820 generic.go:334] "Generic (PLEG): container finished" podID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerID="2044ba44e2360265584b1f1c99572b402737919ae46c5dc3430e7ebdb548610f" exitCode=0 Feb 21 08:00:02 crc kubenswrapper[4820]: I0221 08:00:02.000518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" event={"ID":"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1","Type":"ContainerDied","Data":"2044ba44e2360265584b1f1c99572b402737919ae46c5dc3430e7ebdb548610f"} Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.268760 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") pod \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") pod \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") pod \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\" (UID: \"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1\") " Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.356985 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume" (OuterVolumeSpecName: "config-volume") pod "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" (UID: "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.360967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" (UID: "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.361060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t" (OuterVolumeSpecName: "kube-api-access-q8c2t") pod "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" (UID: "053b4929-8cfe-48ef-b6ab-d57fa3eeebc1"). InnerVolumeSpecName "kube-api-access-q8c2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.457365 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.457396 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:00:03 crc kubenswrapper[4820]: I0221 08:00:03.457407 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8c2t\" (UniqueName: \"kubernetes.io/projected/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1-kube-api-access-q8c2t\") on node \"crc\" DevicePath \"\"" Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.014821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" event={"ID":"053b4929-8cfe-48ef-b6ab-d57fa3eeebc1","Type":"ContainerDied","Data":"fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526"} Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.014859 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf4137d62ae2ed6169a531d457bf81f0f8620f27e990dac138bcc1916476526" Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.014874 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5" Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.341869 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 08:00:04 crc kubenswrapper[4820]: I0221 08:00:04.348048 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527635-gddrt"] Feb 21 08:00:05 crc kubenswrapper[4820]: I0221 08:00:05.703882 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbbeb29-093d-424c-aa21-a711f564f201" path="/var/lib/kubelet/pods/ebbbeb29-093d-424c-aa21-a711f564f201/volumes" Feb 21 08:00:17 crc kubenswrapper[4820]: I0221 08:00:17.881971 4820 scope.go:117] "RemoveContainer" containerID="a723e81e08af1fbe61c3aa1a83712ca47314287f719a875048e1f08fe12358d0" Feb 21 08:01:19 crc kubenswrapper[4820]: I0221 08:01:19.921186 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 08:01:19 crc kubenswrapper[4820]: I0221 08:01:19.926119 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-p4pxl"] Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.048294 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:20 crc kubenswrapper[4820]: E0221 08:01:20.048677 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerName="collect-profiles" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.048697 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerName="collect-profiles" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.048819 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" containerName="collect-profiles" Feb 
21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.049290 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.053234 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.054263 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.055120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-45wzb" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.058494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.059048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.076915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.076956 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.077013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178590 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.178936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") 
" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.195711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"crc-storage-crc-kplkz\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.371748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.788262 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:20 crc kubenswrapper[4820]: I0221 08:01:20.795064 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:01:21 crc kubenswrapper[4820]: I0221 08:01:21.636891 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerStarted","Data":"abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106"} Feb 21 08:01:21 crc kubenswrapper[4820]: I0221 08:01:21.636948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerStarted","Data":"e44c8913d6409c2f44eaa80f4035e9e410c6bf752da88f9264ce2373fbeb87f0"} Feb 21 08:01:21 crc kubenswrapper[4820]: I0221 08:01:21.708432 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c764255-4b53-476b-ad40-4bd38c76f92c" path="/var/lib/kubelet/pods/3c764255-4b53-476b-ad40-4bd38c76f92c/volumes" Feb 21 08:01:22 crc kubenswrapper[4820]: I0221 08:01:22.646488 4820 generic.go:334] "Generic (PLEG): container finished" podID="16aa1c55-d991-403c-afc5-a7b85d23c010" 
containerID="abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106" exitCode=0 Feb 21 08:01:22 crc kubenswrapper[4820]: I0221 08:01:22.646526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerDied","Data":"abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106"} Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.894779 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") pod \"16aa1c55-d991-403c-afc5-a7b85d23c010\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") pod \"16aa1c55-d991-403c-afc5-a7b85d23c010\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") pod \"16aa1c55-d991-403c-afc5-a7b85d23c010\" (UID: \"16aa1c55-d991-403c-afc5-a7b85d23c010\") " Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.930960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "16aa1c55-d991-403c-afc5-a7b85d23c010" (UID: "16aa1c55-d991-403c-afc5-a7b85d23c010"). 
InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.936131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f" (OuterVolumeSpecName: "kube-api-access-z5d7f") pod "16aa1c55-d991-403c-afc5-a7b85d23c010" (UID: "16aa1c55-d991-403c-afc5-a7b85d23c010"). InnerVolumeSpecName "kube-api-access-z5d7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:01:23 crc kubenswrapper[4820]: I0221 08:01:23.949782 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "16aa1c55-d991-403c-afc5-a7b85d23c010" (UID: "16aa1c55-d991-403c-afc5-a7b85d23c010"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.032657 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5d7f\" (UniqueName: \"kubernetes.io/projected/16aa1c55-d991-403c-afc5-a7b85d23c010-kube-api-access-z5d7f\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.032697 4820 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/16aa1c55-d991-403c-afc5-a7b85d23c010-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.032713 4820 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/16aa1c55-d991-403c-afc5-a7b85d23c010-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.661835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kplkz" 
event={"ID":"16aa1c55-d991-403c-afc5-a7b85d23c010","Type":"ContainerDied","Data":"e44c8913d6409c2f44eaa80f4035e9e410c6bf752da88f9264ce2373fbeb87f0"} Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.661877 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44c8913d6409c2f44eaa80f4035e9e410c6bf752da88f9264ce2373fbeb87f0" Feb 21 08:01:24 crc kubenswrapper[4820]: I0221 08:01:24.661882 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kplkz" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.055665 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.061090 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kplkz"] Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.199873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-z77q6"] Feb 21 08:01:26 crc kubenswrapper[4820]: E0221 08:01:26.200210 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" containerName="storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.200227 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" containerName="storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.200492 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" containerName="storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.201064 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.202585 4820 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-45wzb" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.204711 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.204869 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.205607 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.208408 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z77q6"] Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.263616 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.263671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.263729 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"crc-storage-crc-z77q6\" (UID: 
\"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.364674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.364743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.364815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.365184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.365585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.382904 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"crc-storage-crc-z77q6\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.522001 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:26 crc kubenswrapper[4820]: I0221 08:01:26.791296 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z77q6"] Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.693996 4820 generic.go:334] "Generic (PLEG): container finished" podID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerID="12b4f7c9ed195d13c22453ee4884c11b5fecd881b33616d52b44ca9b032f4b29" exitCode=0 Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.694037 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z77q6" event={"ID":"3425985d-d02a-4566-b1c3-ae15f48de2a0","Type":"ContainerDied","Data":"12b4f7c9ed195d13c22453ee4884c11b5fecd881b33616d52b44ca9b032f4b29"} Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.694646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z77q6" event={"ID":"3425985d-d02a-4566-b1c3-ae15f48de2a0","Type":"ContainerStarted","Data":"ecccc54d66bba4ecd6af298662d91b27db96c14ff355eb9a2b966fdb18fb43d8"} Feb 21 08:01:27 crc kubenswrapper[4820]: I0221 08:01:27.706570 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16aa1c55-d991-403c-afc5-a7b85d23c010" path="/var/lib/kubelet/pods/16aa1c55-d991-403c-afc5-a7b85d23c010/volumes" Feb 21 08:01:28 crc kubenswrapper[4820]: I0221 08:01:28.966347 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.099911 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") pod \"3425985d-d02a-4566-b1c3-ae15f48de2a0\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100334 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") pod \"3425985d-d02a-4566-b1c3-ae15f48de2a0\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100422 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") pod \"3425985d-d02a-4566-b1c3-ae15f48de2a0\" (UID: \"3425985d-d02a-4566-b1c3-ae15f48de2a0\") " Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100436 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3425985d-d02a-4566-b1c3-ae15f48de2a0" (UID: "3425985d-d02a-4566-b1c3-ae15f48de2a0"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.100763 4820 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3425985d-d02a-4566-b1c3-ae15f48de2a0-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.105447 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z" (OuterVolumeSpecName: "kube-api-access-5wn4z") pod "3425985d-d02a-4566-b1c3-ae15f48de2a0" (UID: "3425985d-d02a-4566-b1c3-ae15f48de2a0"). InnerVolumeSpecName "kube-api-access-5wn4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.117164 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3425985d-d02a-4566-b1c3-ae15f48de2a0" (UID: "3425985d-d02a-4566-b1c3-ae15f48de2a0"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.201874 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wn4z\" (UniqueName: \"kubernetes.io/projected/3425985d-d02a-4566-b1c3-ae15f48de2a0-kube-api-access-5wn4z\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.201908 4820 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3425985d-d02a-4566-b1c3-ae15f48de2a0-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.706767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z77q6" event={"ID":"3425985d-d02a-4566-b1c3-ae15f48de2a0","Type":"ContainerDied","Data":"ecccc54d66bba4ecd6af298662d91b27db96c14ff355eb9a2b966fdb18fb43d8"} Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.706809 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z77q6"
Feb 21 08:01:29 crc kubenswrapper[4820]: I0221 08:01:29.706813 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecccc54d66bba4ecd6af298662d91b27db96c14ff355eb9a2b966fdb18fb43d8"
Feb 21 08:02:13 crc kubenswrapper[4820]: I0221 08:02:13.816203 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:02:13 crc kubenswrapper[4820]: I0221 08:02:13.816908 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:02:17 crc kubenswrapper[4820]: I0221 08:02:17.946635 4820 scope.go:117] "RemoveContainer" containerID="edb2f0d9506d60a67187b5d382cfd1305f456f91506d3822d04d40dbb03ad374"
Feb 21 08:02:43 crc kubenswrapper[4820]: I0221 08:02:43.816974 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:02:43 crc kubenswrapper[4820]: I0221 08:02:43.819306 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.815835 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.816544 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.816588 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.817201 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 08:03:13 crc kubenswrapper[4820]: I0221 08:03:13.817286 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" gracePeriod=600
Feb 21 08:03:13 crc kubenswrapper[4820]: E0221 08:03:13.953832 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455020 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" exitCode=0
Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"}
Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455106 4820 scope.go:117] "RemoveContainer" containerID="346b88f027dcad0ac2b4eeb21d59bcbf957c89a096d7ee7e9fc70f006fda192b"
Feb 21 08:03:14 crc kubenswrapper[4820]: I0221 08:03:14.455608 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:03:14 crc kubenswrapper[4820]: E0221 08:03:14.455957 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:03:27 crc kubenswrapper[4820]: I0221 08:03:27.696761 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:03:27 crc kubenswrapper[4820]: E0221 08:03:27.697630 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:03:38 crc kubenswrapper[4820]: I0221 08:03:38.696384 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:03:38 crc kubenswrapper[4820]: E0221 08:03:38.696987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:03:51 crc kubenswrapper[4820]: I0221 08:03:51.698583 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:03:51 crc kubenswrapper[4820]: E0221 08:03:51.699549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.398095 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:53 crc kubenswrapper[4820]: E0221 08:03:53.398427 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerName="storage"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.398441 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerName="storage"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.398558 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3425985d-d02a-4566-b1c3-ae15f48de2a0" containerName="storage"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.399227 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.404759 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.405183 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.405542 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.405705 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-vh5w4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.412368 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.416747 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.425999 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.443094 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.453676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469275 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.469368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570323 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.570519 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.571469 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.572648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.573281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.601039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"dnsmasq-dns-7f67b98cb7-pjgtn\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.604297 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"dnsmasq-dns-774d9db845-q2fn4\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.631348 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.632043 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.656544 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.658487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.671530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.671605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.671661 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.675083 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.741592 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.773509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.773612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.773666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.774810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.775767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.809492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"dnsmasq-dns-787c4dc9-p57fq\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.937337 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.937849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq"
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.973077 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"]
Feb 21 08:03:53 crc kubenswrapper[4820]: I0221 08:03:53.974560 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.005670 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.078083 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.078181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.079061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.180568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.180651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.180740 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.181737 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.181757 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.201257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"dnsmasq-dns-bb88b7bf5-mdrlh\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.203724 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.266413 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.305111 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.331096 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.760025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.779045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" event={"ID":"6f3506e9-072b-4eef-afbb-95daa9d0a56d","Type":"ContainerStarted","Data":"d0f2ab4d624cd6284f03423f831cb678e5ef49c1e3adcba6f173f8f5e8d13c31"}
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.781454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" event={"ID":"6637ce38-7cdd-4970-b22e-0762f51447f8","Type":"ContainerStarted","Data":"017b50f9e39ade686cd66378baac106456851ad2545fb098c57598953b748fb6"}
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.782596 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerStarted","Data":"f48007f449d91c484371d5c8f69062d993cf3aaa4ef67c33803bc68750b06ebd"}
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.796383 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.797633 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.800694 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801464 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801536 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7r6cd"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801606 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801688 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.801792 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.803974 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.812013 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.991727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.996921 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.997026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.997101 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:54 crc kubenswrapper[4820]: I0221 08:03:54.997177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.086755 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.090112 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.091810 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.092029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.092628 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.096734 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.097258 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z7jtg"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.097417 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.097766 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098783 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098852 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098872 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098925 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.098964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.103735 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.106600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.106951 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.107662 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.107802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.108531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.139182 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.139230 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/934de71409e5f275cb94cfa922d2597bbcc02a71598b29b6833fab6760155167/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.191660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.192051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.202950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.203149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204186 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204280 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.204674 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.210767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.210883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.211573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.248295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306813 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhfzr\" (UniqueName: 
\"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306875 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.306992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.307833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.308063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.308472 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.308653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.310124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.311813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.312649 4820 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.312753 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb4bbdf2b86e995ba706b4b62c0c402d7bc60ad53da33c49f02f1a8b30c7c64a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.314382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.315824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.330148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.335963 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.355814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.438347 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.446131 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.816702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerStarted","Data":"c0fbba8abdf7dcc3b8fefeafbe554110877b155984d0717a8a1b1d9fb8c1f3ce"} Feb 21 08:03:55 crc kubenswrapper[4820]: I0221 08:03:55.962385 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:03:55 crc kubenswrapper[4820]: W0221 08:03:55.979220 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1252400_6674_4a2e_a4ad_dc8f7fc45dee.slice/crio-e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128 WatchSource:0}: Error finding container e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128: Status 404 returned error can't find the container with id e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128 Feb 21 08:03:56 crc 
kubenswrapper[4820]: I0221 08:03:56.079368 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:03:56 crc kubenswrapper[4820]: W0221 08:03:56.087515 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d51a301_b647_44f6_bd29_7db35420fa2c.slice/crio-cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3 WatchSource:0}: Error finding container cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3: Status 404 returned error can't find the container with id cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3 Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.102051 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.107593 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.110358 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.110475 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.110503 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-p6xzl" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.111945 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.115675 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.130366 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"combined-ca-bundle" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226096 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226153 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-config-data-default\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-kolla-config\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " 
pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226404 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/21d2b3a6-8a28-4287-8953-23782681799a-kube-api-access-nrh97\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.226533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/21d2b3a6-8a28-4287-8953-23782681799a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.327911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc 
kubenswrapper[4820]: I0221 08:03:56.328328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-kolla-config\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328385 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/21d2b3a6-8a28-4287-8953-23782681799a-kube-api-access-nrh97\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/21d2b3a6-8a28-4287-8953-23782681799a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328496 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.328528 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-config-data-default\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.329938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/21d2b3a6-8a28-4287-8953-23782681799a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.331217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-config-data-default\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.331367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-kolla-config\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.332008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d2b3a6-8a28-4287-8953-23782681799a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.334635 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.334669 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eec13e17780b306286e9ac1088ca2a300c26c16fb52d56a613cbee8d6a6cb356/globalmount\"" pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.335529 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.335826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d2b3a6-8a28-4287-8953-23782681799a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.346687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrh97\" (UniqueName: \"kubernetes.io/projected/21d2b3a6-8a28-4287-8953-23782681799a-kube-api-access-nrh97\") pod \"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.381809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce534ea6-da62-4b0b-bb7c-291acc11ea96\") pod 
\"openstack-galera-0\" (UID: \"21d2b3a6-8a28-4287-8953-23782681799a\") " pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.445611 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.825060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerStarted","Data":"cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3"} Feb 21 08:03:56 crc kubenswrapper[4820]: I0221 08:03:56.825999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerStarted","Data":"e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128"} Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.555628 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 21 08:03:57 crc kubenswrapper[4820]: W0221 08:03:57.558826 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21d2b3a6_8a28_4287_8953_23782681799a.slice/crio-a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf WatchSource:0}: Error finding container a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf: Status 404 returned error can't find the container with id a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.718214 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.720164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.723523 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727579 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8pfnn" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727731 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727803 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.727864 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.837642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerStarted","Data":"a814671671097a624cd72202261cc67fe503741f93e1c2a73e5529b3f757facf"} Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865717 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865773 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bltzr\" (UniqueName: \"kubernetes.io/projected/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kube-api-access-bltzr\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.865962 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.967954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968155 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bltzr\" (UniqueName: \"kubernetes.io/projected/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kube-api-access-bltzr\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.968661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.970024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.970445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.970826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.971585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.979002 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.979460 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8833ec5ee5aa131adee272449360987867638031830eb2bbc628affc6d67dded/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.982022 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:57 crc kubenswrapper[4820]: I0221 08:03:57.984727 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.000857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bltzr\" (UniqueName: \"kubernetes.io/projected/e0a14fdd-7df9-4cac-aa21-b4562f320fcc-kube-api-access-bltzr\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.035260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2c9fcaf-3525-4616-92e2-b9383add219a\") pod \"openstack-cell1-galera-0\" (UID: \"e0a14fdd-7df9-4cac-aa21-b4562f320fcc\") " pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.080740 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.081844 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.085467 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.087349 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.087430 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-v4mhk" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.114568 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kolla-config\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-config-data\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175634 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8lff\" (UniqueName: \"kubernetes.io/projected/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kube-api-access-j8lff\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175674 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.175736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kolla-config\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277558 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-config-data\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8lff\" (UniqueName: \"kubernetes.io/projected/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kube-api-access-j8lff\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.277644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.278874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-config-data\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.282226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kolla-config\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.283432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 
08:03:58.283729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c039fd9-87df-497c-8e40-f9b5d2759d0f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.292655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8lff\" (UniqueName: \"kubernetes.io/projected/4c039fd9-87df-497c-8e40-f9b5d2759d0f-kube-api-access-j8lff\") pod \"memcached-0\" (UID: \"4c039fd9-87df-497c-8e40-f9b5d2759d0f\") " pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.342794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.402743 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.842607 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: W0221 08:03:58.858265 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a14fdd_7df9_4cac_aa21_b4562f320fcc.slice/crio-f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db WatchSource:0}: Error finding container f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db: Status 404 returned error can't find the container with id f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db Feb 21 08:03:58 crc kubenswrapper[4820]: I0221 08:03:58.911938 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 21 08:03:58 crc kubenswrapper[4820]: W0221 08:03:58.917495 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c039fd9_87df_497c_8e40_f9b5d2759d0f.slice/crio-19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89 WatchSource:0}: Error finding container 19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89: Status 404 returned error can't find the container with id 19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89 Feb 21 08:03:59 crc kubenswrapper[4820]: I0221 08:03:59.938679 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerStarted","Data":"f8f06ef502eb2b3d1a1442c8c96b3013d4bdf04d421b6578714442ab5fa3c2db"} Feb 21 08:03:59 crc kubenswrapper[4820]: I0221 08:03:59.948606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4c039fd9-87df-497c-8e40-f9b5d2759d0f","Type":"ContainerStarted","Data":"19c3153e5e1e140fc30da644f933b01ed4a61adb07a14acc1b9ffb1347101e89"} Feb 21 08:04:03 crc kubenswrapper[4820]: I0221 08:04:03.697499 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:03 crc kubenswrapper[4820]: E0221 08:04:03.698008 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:14 crc kubenswrapper[4820]: I0221 08:04:14.696767 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:14 crc kubenswrapper[4820]: E0221 08:04:14.697586 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.238275 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.240535 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.252078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.390578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.390651 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.390865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") 
pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.492503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.492595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.492645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.493172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.493190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") pod \"redhat-operators-vm7mv\" (UID: 
\"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.690803 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"redhat-operators-vm7mv\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:16 crc kubenswrapper[4820]: I0221 08:04:16.863036 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.305868 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.306750 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.306916 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhfzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(3d51a301-b647-44f6-bd29-7db35420fa2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.308400 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.368941 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.368988 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.369118 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8c69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e1252400-6674-4a2e-a4ad-dc8f7fc45dee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.370495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.406953 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.407021 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.407160 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h564h676h699hcdh67bh66hfdh569h545h648h94h546h696h668h89h96h667h575h595h5d9h584h8dhbdh697h54bhb7h58fh5c9hd8h5cdh5c7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkxxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bb88b7bf5-mdrlh_openstack(95cd39a3-df2b-4f19-bf18-d5fcf790995e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.408426 4820 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.431647 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.432008 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.432212 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fdpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-774d9db845-q2fn4_openstack(6f3506e9-072b-4eef-afbb-95daa9d0a56d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.433874 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" podUID="6f3506e9-072b-4eef-afbb-95daa9d0a56d" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.484904 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" Feb 21 08:04:24 crc kubenswrapper[4820]: E0221 08:04:24.485225 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.828102 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:24 crc kubenswrapper[4820]: W0221 08:04:24.878830 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cfc6863_b2a0_4a8b_8445_d5bdc742e722.slice/crio-9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e WatchSource:0}: Error finding container 9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e: Status 404 returned error can't find the container with id 9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.879759 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.965816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") pod \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.965884 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") pod \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\" (UID: \"6f3506e9-072b-4eef-afbb-95daa9d0a56d\") " Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.966627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config" (OuterVolumeSpecName: "config") pod "6f3506e9-072b-4eef-afbb-95daa9d0a56d" (UID: "6f3506e9-072b-4eef-afbb-95daa9d0a56d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:24 crc kubenswrapper[4820]: I0221 08:04:24.973812 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv" (OuterVolumeSpecName: "kube-api-access-5fdpv") pod "6f3506e9-072b-4eef-afbb-95daa9d0a56d" (UID: "6f3506e9-072b-4eef-afbb-95daa9d0a56d"). InnerVolumeSpecName "kube-api-access-5fdpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.068025 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fdpv\" (UniqueName: \"kubernetes.io/projected/6f3506e9-072b-4eef-afbb-95daa9d0a56d-kube-api-access-5fdpv\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.068065 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3506e9-072b-4eef-afbb-95daa9d0a56d-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.491464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4c039fd9-87df-497c-8e40-f9b5d2759d0f","Type":"ContainerStarted","Data":"9525b66628ad454cf7d4346179fcfdaa2a8305dcd5202e40243ffa35d0570c8d"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.491544 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.493512 4820 generic.go:334] "Generic (PLEG): container finished" podID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.493587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerDied","Data":"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.497729 4820 generic.go:334] "Generic (PLEG): container finished" podID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.497773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.497815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerStarted","Data":"9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.499002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" event={"ID":"6f3506e9-072b-4eef-afbb-95daa9d0a56d","Type":"ContainerDied","Data":"d0f2ab4d624cd6284f03423f831cb678e5ef49c1e3adcba6f173f8f5e8d13c31"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.499129 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774d9db845-q2fn4" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.505332 4820 generic.go:334] "Generic (PLEG): container finished" podID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.505384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerDied","Data":"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.508779 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerStarted","Data":"6f05812bb10b6474f161516b62d59eb3e07f9fe7d773d15e51ec5c4ca6610916"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.511102 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerStarted","Data":"e6b12272abf060e7bc7c4a5ba0d46a1b9858146a60229cfbceed9596a5eb6633"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.512716 4820 generic.go:334] "Generic (PLEG): container finished" podID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerID="4ec48208d2bb76745c2ed2f718a34d9b29c4fe3273e329096ce615fb67617134" exitCode=0 Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.512747 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" event={"ID":"6637ce38-7cdd-4970-b22e-0762f51447f8","Type":"ContainerDied","Data":"4ec48208d2bb76745c2ed2f718a34d9b29c4fe3273e329096ce615fb67617134"} Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.522643 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.079752077 podStartE2EDuration="27.522617363s" podCreationTimestamp="2026-02-21 08:03:58 +0000 UTC" firstStartedPulling="2026-02-21 08:03:58.919695962 +0000 UTC m=+4613.952780160" lastFinishedPulling="2026-02-21 08:04:24.362561248 +0000 UTC m=+4639.395645446" observedRunningTime="2026-02-21 08:04:25.514564555 +0000 UTC m=+4640.547648753" watchObservedRunningTime="2026-02-21 08:04:25.522617363 +0000 UTC m=+4640.555701561" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.698697 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.699339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.735090 4820 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 21 08:04:25 crc kubenswrapper[4820]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 08:04:25 crc kubenswrapper[4820]: > podSandboxID="f48007f449d91c484371d5c8f69062d993cf3aaa4ef67c33803bc68750b06ebd" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.735364 4820 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 21 08:04:25 crc kubenswrapper[4820]: container &Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h54dhb7h666h69h76h59ch55ch65ch596h8h79h5c8h57hc8hfch5d7h697h79h698h5fch644hf9h54chbfh655hfchcbh5f8h646h5f7h89q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8j4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f67b98cb7-pjgtn_openstack(5941b7b4-35ad-4016-b1bc-46b485dc8105): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 21 08:04:25 crc kubenswrapper[4820]: > logger="UnhandledError" Feb 21 08:04:25 crc kubenswrapper[4820]: E0221 08:04:25.736536 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.740334 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"] Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.768591 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-774d9db845-q2fn4"] Feb 21 
08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.866958 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.992449 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") pod \"6637ce38-7cdd-4970-b22e-0762f51447f8\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.992561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") pod \"6637ce38-7cdd-4970-b22e-0762f51447f8\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.992595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") pod \"6637ce38-7cdd-4970-b22e-0762f51447f8\" (UID: \"6637ce38-7cdd-4970-b22e-0762f51447f8\") " Feb 21 08:04:25 crc kubenswrapper[4820]: I0221 08:04:25.996516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k" (OuterVolumeSpecName: "kube-api-access-f465k") pod "6637ce38-7cdd-4970-b22e-0762f51447f8" (UID: "6637ce38-7cdd-4970-b22e-0762f51447f8"). InnerVolumeSpecName "kube-api-access-f465k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.009147 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6637ce38-7cdd-4970-b22e-0762f51447f8" (UID: "6637ce38-7cdd-4970-b22e-0762f51447f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.011537 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config" (OuterVolumeSpecName: "config") pod "6637ce38-7cdd-4970-b22e-0762f51447f8" (UID: "6637ce38-7cdd-4970-b22e-0762f51447f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.094356 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.094391 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6637ce38-7cdd-4970-b22e-0762f51447f8-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.094401 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f465k\" (UniqueName: \"kubernetes.io/projected/6637ce38-7cdd-4970-b22e-0762f51447f8-kube-api-access-f465k\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.520607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerStarted","Data":"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329"} 
Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.522187 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.523133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" event={"ID":"6637ce38-7cdd-4970-b22e-0762f51447f8","Type":"ContainerDied","Data":"017b50f9e39ade686cd66378baac106456851ad2545fb098c57598953b748fb6"} Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.523140 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4dc9-p57fq" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.523171 4820 scope.go:117] "RemoveContainer" containerID="4ec48208d2bb76745c2ed2f718a34d9b29c4fe3273e329096ce615fb67617134" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.528868 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerStarted","Data":"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"} Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.562041 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" podStartSLOduration=-9223372003.29276 podStartE2EDuration="33.562015907s" podCreationTimestamp="2026-02-21 08:03:53 +0000 UTC" firstStartedPulling="2026-02-21 08:03:54.77934964 +0000 UTC m=+4609.812433838" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:04:26.545822499 +0000 UTC m=+4641.578906697" watchObservedRunningTime="2026-02-21 08:04:26.562015907 +0000 UTC m=+4641.595100105" Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 08:04:26.642092 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"] Feb 21 08:04:26 crc kubenswrapper[4820]: I0221 
08:04:26.647794 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-787c4dc9-p57fq"] Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.537017 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerStarted","Data":"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"} Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.537811 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.539305 4820 generic.go:334] "Generic (PLEG): container finished" podID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0" exitCode=0 Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.539388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"} Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.558853 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" podStartSLOduration=4.54615551 podStartE2EDuration="34.55883231s" podCreationTimestamp="2026-02-21 08:03:53 +0000 UTC" firstStartedPulling="2026-02-21 08:03:54.355374486 +0000 UTC m=+4609.388458694" lastFinishedPulling="2026-02-21 08:04:24.368051306 +0000 UTC m=+4639.401135494" observedRunningTime="2026-02-21 08:04:27.556721333 +0000 UTC m=+4642.589805541" watchObservedRunningTime="2026-02-21 08:04:27.55883231 +0000 UTC m=+4642.591916508" Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.706115 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" 
path="/var/lib/kubelet/pods/6637ce38-7cdd-4970-b22e-0762f51447f8/volumes" Feb 21 08:04:27 crc kubenswrapper[4820]: I0221 08:04:27.706862 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3506e9-072b-4eef-afbb-95daa9d0a56d" path="/var/lib/kubelet/pods/6f3506e9-072b-4eef-afbb-95daa9d0a56d/volumes" Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.548622 4820 generic.go:334] "Generic (PLEG): container finished" podID="e0a14fdd-7df9-4cac-aa21-b4562f320fcc" containerID="6f05812bb10b6474f161516b62d59eb3e07f9fe7d773d15e51ec5c4ca6610916" exitCode=0 Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.548687 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerDied","Data":"6f05812bb10b6474f161516b62d59eb3e07f9fe7d773d15e51ec5c4ca6610916"} Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.553009 4820 generic.go:334] "Generic (PLEG): container finished" podID="21d2b3a6-8a28-4287-8953-23782681799a" containerID="e6b12272abf060e7bc7c4a5ba0d46a1b9858146a60229cfbceed9596a5eb6633" exitCode=0 Feb 21 08:04:28 crc kubenswrapper[4820]: I0221 08:04:28.553075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerDied","Data":"e6b12272abf060e7bc7c4a5ba0d46a1b9858146a60229cfbceed9596a5eb6633"} Feb 21 08:04:33 crc kubenswrapper[4820]: I0221 08:04:33.404301 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 21 08:04:33 crc kubenswrapper[4820]: I0221 08:04:33.743537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.307396 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:04:34 crc 
kubenswrapper[4820]: I0221 08:04:34.377552 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"] Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.596832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerStarted","Data":"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"} Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.600184 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0a14fdd-7df9-4cac-aa21-b4562f320fcc","Type":"ContainerStarted","Data":"c202736b70c3c50812e2fb33af94d562a988d35bcd83dc4b9d88d0b245141ccf"} Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.605604 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns" containerID="cri-o://57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" gracePeriod=10 Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.605799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"21d2b3a6-8a28-4287-8953-23782681799a","Type":"ContainerStarted","Data":"ce3abed658322c71c990f6ab5191ec01dfe91aad9111d477e6c24792ec8d9bf4"} Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.636607 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vm7mv" podStartSLOduration=9.917228195 podStartE2EDuration="18.636584966s" podCreationTimestamp="2026-02-21 08:04:16 +0000 UTC" firstStartedPulling="2026-02-21 08:04:25.502462578 +0000 UTC m=+4640.535546776" lastFinishedPulling="2026-02-21 08:04:34.221819349 +0000 UTC m=+4649.254903547" observedRunningTime="2026-02-21 08:04:34.622948087 +0000 UTC m=+4649.656032295" 
watchObservedRunningTime="2026-02-21 08:04:34.636584966 +0000 UTC m=+4649.669669164" Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.664519 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.161269163 podStartE2EDuration="38.66448788s" podCreationTimestamp="2026-02-21 08:03:56 +0000 UTC" firstStartedPulling="2026-02-21 08:03:58.862463835 +0000 UTC m=+4613.895548033" lastFinishedPulling="2026-02-21 08:04:24.365682552 +0000 UTC m=+4639.398766750" observedRunningTime="2026-02-21 08:04:34.661083517 +0000 UTC m=+4649.694167705" watchObservedRunningTime="2026-02-21 08:04:34.66448788 +0000 UTC m=+4649.697572078" Feb 21 08:04:34 crc kubenswrapper[4820]: I0221 08:04:34.697611 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.840901586 podStartE2EDuration="39.697563783s" podCreationTimestamp="2026-02-21 08:03:55 +0000 UTC" firstStartedPulling="2026-02-21 08:03:57.560322742 +0000 UTC m=+4612.593406940" lastFinishedPulling="2026-02-21 08:04:24.416984939 +0000 UTC m=+4639.450069137" observedRunningTime="2026-02-21 08:04:34.695640651 +0000 UTC m=+4649.728724849" watchObservedRunningTime="2026-02-21 08:04:34.697563783 +0000 UTC m=+4649.730647981" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.025753 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.154805 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") pod \"5941b7b4-35ad-4016-b1bc-46b485dc8105\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.154905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") pod \"5941b7b4-35ad-4016-b1bc-46b485dc8105\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.154952 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") pod \"5941b7b4-35ad-4016-b1bc-46b485dc8105\" (UID: \"5941b7b4-35ad-4016-b1bc-46b485dc8105\") " Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.172313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w" (OuterVolumeSpecName: "kube-api-access-x8j4w") pod "5941b7b4-35ad-4016-b1bc-46b485dc8105" (UID: "5941b7b4-35ad-4016-b1bc-46b485dc8105"). InnerVolumeSpecName "kube-api-access-x8j4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.187979 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config" (OuterVolumeSpecName: "config") pod "5941b7b4-35ad-4016-b1bc-46b485dc8105" (UID: "5941b7b4-35ad-4016-b1bc-46b485dc8105"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.201174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5941b7b4-35ad-4016-b1bc-46b485dc8105" (UID: "5941b7b4-35ad-4016-b1bc-46b485dc8105"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.257026 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.257068 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8j4w\" (UniqueName: \"kubernetes.io/projected/5941b7b4-35ad-4016-b1bc-46b485dc8105-kube-api-access-x8j4w\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.257082 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5941b7b4-35ad-4016-b1bc-46b485dc8105-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614034 4820 generic.go:334] "Generic (PLEG): container finished" podID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" exitCode=0 Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614087 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614084 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerDied","Data":"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"} Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f67b98cb7-pjgtn" event={"ID":"5941b7b4-35ad-4016-b1bc-46b485dc8105","Type":"ContainerDied","Data":"f48007f449d91c484371d5c8f69062d993cf3aaa4ef67c33803bc68750b06ebd"} Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.614289 4820 scope.go:117] "RemoveContainer" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.631871 4820 scope.go:117] "RemoveContainer" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658155 4820 scope.go:117] "RemoveContainer" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658486 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"] Feb 21 08:04:35 crc kubenswrapper[4820]: E0221 08:04:35.658780 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26\": container with ID starting with 57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26 not found: ID does not exist" containerID="57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658827 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26"} err="failed to get container status \"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26\": rpc error: code = NotFound desc = could not find container \"57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26\": container with ID starting with 57f813d470b77d02110bcd72b8c67ea9ba5478fa2ba16de7904a49b17fd77a26 not found: ID does not exist" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.658852 4820 scope.go:117] "RemoveContainer" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae" Feb 21 08:04:35 crc kubenswrapper[4820]: E0221 08:04:35.659200 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae\": container with ID starting with 575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae not found: ID does not exist" containerID="575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.659226 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae"} err="failed to get container status \"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae\": rpc error: code = NotFound desc = could not find container \"575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae\": container with ID starting with 575eb49dbdcb5a3e67e9f61162ea5b194fc69c2a6bdb18f60af97883bbf26cae not found: ID does not exist" Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.665214 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f67b98cb7-pjgtn"] Feb 21 08:04:35 crc kubenswrapper[4820]: I0221 08:04:35.708599 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" path="/var/lib/kubelet/pods/5941b7b4-35ad-4016-b1bc-46b485dc8105/volumes" Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.675644 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.676746 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.697269 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:36 crc kubenswrapper[4820]: E0221 08:04:36.697496 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.864080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:36 crc kubenswrapper[4820]: I0221 08:04:36.864143 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:37 crc kubenswrapper[4820]: I0221 08:04:37.909274 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vm7mv" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" probeResult="failure" output=< Feb 21 08:04:37 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:04:37 crc kubenswrapper[4820]: > Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.342989 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.343052 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.410963 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 21 08:04:38 crc kubenswrapper[4820]: I0221 08:04:38.778338 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 21 08:04:39 crc kubenswrapper[4820]: I0221 08:04:39.201579 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 21 08:04:39 crc kubenswrapper[4820]: I0221 08:04:39.276025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 21 08:04:41 crc kubenswrapper[4820]: I0221 08:04:41.722995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerStarted","Data":"ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d"} Feb 21 08:04:41 crc kubenswrapper[4820]: I0221 08:04:41.724646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerStarted","Data":"e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36"} Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.104426 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v6p64"] Feb 21 08:04:45 crc kubenswrapper[4820]: E0221 08:04:45.104995 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="init" Feb 21 08:04:45 crc kubenswrapper[4820]: 
I0221 08:04:45.105007 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="init" Feb 21 08:04:45 crc kubenswrapper[4820]: E0221 08:04:45.105019 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105026 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns" Feb 21 08:04:45 crc kubenswrapper[4820]: E0221 08:04:45.105035 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerName="init" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105040 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerName="init" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105212 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5941b7b4-35ad-4016-b1bc-46b485dc8105" containerName="dnsmasq-dns" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105229 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6637ce38-7cdd-4970-b22e-0762f51447f8" containerName="init" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.105810 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.109431 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.115845 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v6p64"] Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.141332 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.141420 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.243058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.243460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"root-account-create-update-v6p64\" (UID: 
\"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.244163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.260849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"root-account-create-update-v6p64\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.458331 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:45 crc kubenswrapper[4820]: I0221 08:04:45.882880 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v6p64"] Feb 21 08:04:45 crc kubenswrapper[4820]: W0221 08:04:45.886992 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041a8286_eca6_4595_8c96_dd70be516a57.slice/crio-6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57 WatchSource:0}: Error finding container 6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57: Status 404 returned error can't find the container with id 6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57 Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.760607 4820 generic.go:334] "Generic (PLEG): container finished" podID="041a8286-eca6-4595-8c96-dd70be516a57" containerID="eb27aecc6ecdd33121cbb1ef730b34658946fa8c269080b0841bca37cd76c02f" exitCode=0 Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.760658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6p64" event={"ID":"041a8286-eca6-4595-8c96-dd70be516a57","Type":"ContainerDied","Data":"eb27aecc6ecdd33121cbb1ef730b34658946fa8c269080b0841bca37cd76c02f"} Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.760719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6p64" event={"ID":"041a8286-eca6-4595-8c96-dd70be516a57","Type":"ContainerStarted","Data":"6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57"} Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.919100 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:46 crc kubenswrapper[4820]: I0221 08:04:46.960660 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:47 crc kubenswrapper[4820]: I0221 08:04:47.434451 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.006507 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.091249 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") pod \"041a8286-eca6-4595-8c96-dd70be516a57\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.091296 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") pod \"041a8286-eca6-4595-8c96-dd70be516a57\" (UID: \"041a8286-eca6-4595-8c96-dd70be516a57\") " Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.091679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041a8286-eca6-4595-8c96-dd70be516a57" (UID: "041a8286-eca6-4595-8c96-dd70be516a57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.095225 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg" (OuterVolumeSpecName: "kube-api-access-v8jwg") pod "041a8286-eca6-4595-8c96-dd70be516a57" (UID: "041a8286-eca6-4595-8c96-dd70be516a57"). InnerVolumeSpecName "kube-api-access-v8jwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.192516 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041a8286-eca6-4595-8c96-dd70be516a57-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.192742 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jwg\" (UniqueName: \"kubernetes.io/projected/041a8286-eca6-4595-8c96-dd70be516a57-kube-api-access-v8jwg\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.697225 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:04:48 crc kubenswrapper[4820]: E0221 08:04:48.697469 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.774859 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v6p64" event={"ID":"041a8286-eca6-4595-8c96-dd70be516a57","Type":"ContainerDied","Data":"6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57"} Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.774895 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v6p64" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.774904 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6321e894ebcb1e05a3b9d8b953f80322e9687c8ca4fb7093f10b1e0f73e78b57" Feb 21 08:04:48 crc kubenswrapper[4820]: I0221 08:04:48.775015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vm7mv" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" containerID="cri-o://d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" gracePeriod=2 Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.145432 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.215669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") pod \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.216318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") pod \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.216379 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") pod \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\" (UID: \"3cfc6863-b2a0-4a8b-8445-d5bdc742e722\") " Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.216710 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities" (OuterVolumeSpecName: "utilities") pod "3cfc6863-b2a0-4a8b-8445-d5bdc742e722" (UID: "3cfc6863-b2a0-4a8b-8445-d5bdc742e722"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.231488 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r" (OuterVolumeSpecName: "kube-api-access-v567r") pod "3cfc6863-b2a0-4a8b-8445-d5bdc742e722" (UID: "3cfc6863-b2a0-4a8b-8445-d5bdc742e722"). InnerVolumeSpecName "kube-api-access-v567r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.317775 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.317814 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v567r\" (UniqueName: \"kubernetes.io/projected/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-kube-api-access-v567r\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.339392 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cfc6863-b2a0-4a8b-8445-d5bdc742e722" (UID: "3cfc6863-b2a0-4a8b-8445-d5bdc742e722"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.418601 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfc6863-b2a0-4a8b-8445-d5bdc742e722-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783731 4820 generic.go:334] "Generic (PLEG): container finished" podID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" exitCode=0 Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783776 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"} Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vm7mv" event={"ID":"3cfc6863-b2a0-4a8b-8445-d5bdc742e722","Type":"ContainerDied","Data":"9ef816a06630805baa562657ff68966e145fde66b4685314138084555b4c9c6e"} Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783803 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vm7mv" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.783888 4820 scope.go:117] "RemoveContainer" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.804391 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.807546 4820 scope.go:117] "RemoveContainer" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.808276 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vm7mv"] Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.831437 4820 scope.go:117] "RemoveContainer" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.850603 4820 scope.go:117] "RemoveContainer" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" Feb 21 08:04:49 crc kubenswrapper[4820]: E0221 08:04:49.853362 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d\": container with ID starting with d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d not found: ID does not exist" containerID="d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853394 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d"} err="failed to get container status \"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d\": rpc error: code = NotFound desc = could not find container 
\"d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d\": container with ID starting with d3b69629a040db3ccd4fbacceb646ed8e260aca874a9ae1da59fb78038492c7d not found: ID does not exist" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853416 4820 scope.go:117] "RemoveContainer" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0" Feb 21 08:04:49 crc kubenswrapper[4820]: E0221 08:04:49.853786 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0\": container with ID starting with 0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0 not found: ID does not exist" containerID="0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853841 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0"} err="failed to get container status \"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0\": rpc error: code = NotFound desc = could not find container \"0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0\": container with ID starting with 0506d7efe9708cfbf890df1baf4622ea77d4c42a9af3027d920d5fda9e53e7b0 not found: ID does not exist" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.853856 4820 scope.go:117] "RemoveContainer" containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b" Feb 21 08:04:49 crc kubenswrapper[4820]: E0221 08:04:49.854207 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b\": container with ID starting with bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b not found: ID does not exist" 
containerID="bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b" Feb 21 08:04:49 crc kubenswrapper[4820]: I0221 08:04:49.854227 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b"} err="failed to get container status \"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b\": rpc error: code = NotFound desc = could not find container \"bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b\": container with ID starting with bcde44e6b9b43a1a497851c64271f3b76bffadfd4e48133798238f8a4c07457b not found: ID does not exist" Feb 21 08:04:51 crc kubenswrapper[4820]: I0221 08:04:51.713912 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" path="/var/lib/kubelet/pods/3cfc6863-b2a0-4a8b-8445-d5bdc742e722/volumes" Feb 21 08:04:51 crc kubenswrapper[4820]: I0221 08:04:51.714965 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v6p64"] Feb 21 08:04:51 crc kubenswrapper[4820]: I0221 08:04:51.723287 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v6p64"] Feb 21 08:04:53 crc kubenswrapper[4820]: I0221 08:04:53.705989 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041a8286-eca6-4595-8c96-dd70be516a57" path="/var/lib/kubelet/pods/041a8286-eca6-4595-8c96-dd70be516a57/volumes" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.713987 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714676 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-utilities" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714694 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-utilities" Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714717 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041a8286-eca6-4595-8c96-dd70be516a57" containerName="mariadb-account-create-update" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714724 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="041a8286-eca6-4595-8c96-dd70be516a57" containerName="mariadb-account-create-update" Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714738 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-content" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714744 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="extract-content" Feb 21 08:04:56 crc kubenswrapper[4820]: E0221 08:04:56.714760 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714768 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714937 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfc6863-b2a0-4a8b-8445-d5bdc742e722" containerName="registry-server" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.714949 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="041a8286-eca6-4595-8c96-dd70be516a57" containerName="mariadb-account-create-update" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.715526 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.717855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.730039 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.866307 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.866398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.967747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.967819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"root-account-create-update-pnwbk\" (UID: 
\"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.968801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:56 crc kubenswrapper[4820]: I0221 08:04:56.990471 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"root-account-create-update-pnwbk\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:57 crc kubenswrapper[4820]: I0221 08:04:57.072715 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnwbk" Feb 21 08:04:57 crc kubenswrapper[4820]: W0221 08:04:57.505516 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a608f92_6849_4847_9b75_495f1d27b9cf.slice/crio-c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7 WatchSource:0}: Error finding container c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7: Status 404 returned error can't find the container with id c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7 Feb 21 08:04:57 crc kubenswrapper[4820]: I0221 08:04:57.506441 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:04:57 crc kubenswrapper[4820]: I0221 08:04:57.842775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnwbk" event={"ID":"5a608f92-6849-4847-9b75-495f1d27b9cf","Type":"ContainerStarted","Data":"c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7"} Feb 21 08:04:58 crc kubenswrapper[4820]: I0221 08:04:58.856557 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerID="3c304cff3e4ea891fe22f2e446f6db20e2204d9e270769a7f2bedb12df9f52ce" exitCode=0 Feb 21 08:04:58 crc kubenswrapper[4820]: I0221 08:04:58.856598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnwbk" event={"ID":"5a608f92-6849-4847-9b75-495f1d27b9cf","Type":"ContainerDied","Data":"3c304cff3e4ea891fe22f2e446f6db20e2204d9e270769a7f2bedb12df9f52ce"} Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.349614 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnwbk" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.526761 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") pod \"5a608f92-6849-4847-9b75-495f1d27b9cf\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.526900 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") pod \"5a608f92-6849-4847-9b75-495f1d27b9cf\" (UID: \"5a608f92-6849-4847-9b75-495f1d27b9cf\") " Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.527621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a608f92-6849-4847-9b75-495f1d27b9cf" (UID: "5a608f92-6849-4847-9b75-495f1d27b9cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.531529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2" (OuterVolumeSpecName: "kube-api-access-62fg2") pod "5a608f92-6849-4847-9b75-495f1d27b9cf" (UID: "5a608f92-6849-4847-9b75-495f1d27b9cf"). InnerVolumeSpecName "kube-api-access-62fg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.629397 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fg2\" (UniqueName: \"kubernetes.io/projected/5a608f92-6849-4847-9b75-495f1d27b9cf-kube-api-access-62fg2\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.629433 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a608f92-6849-4847-9b75-495f1d27b9cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.696572 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:00 crc kubenswrapper[4820]: E0221 08:05:00.696816 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.870159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnwbk" event={"ID":"5a608f92-6849-4847-9b75-495f1d27b9cf","Type":"ContainerDied","Data":"c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7"} Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.870203 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c304521e78b415324320fa7c6d4d01af4d6a2c8ae5423dae1c6e1c4dc91d04c7" Feb 21 08:05:00 crc kubenswrapper[4820]: I0221 08:05:00.870208 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnwbk" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.259541 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:11 crc kubenswrapper[4820]: E0221 08:05:11.260307 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerName="mariadb-account-create-update" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.260318 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerName="mariadb-account-create-update" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.260473 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" containerName="mariadb-account-create-update" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.261553 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.274379 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.394120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.394277 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"redhat-marketplace-wc562\" (UID: 
\"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.394705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.496303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.496383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.496403 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.497235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"redhat-marketplace-wc562\" (UID: 
\"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.497533 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.530603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"redhat-marketplace-wc562\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:11 crc kubenswrapper[4820]: I0221 08:05:11.593615 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.029100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.964933 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" exitCode=0 Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.965007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564"} Feb 21 08:05:12 crc kubenswrapper[4820]: I0221 08:05:12.965250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" 
event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerStarted","Data":"efd1e37053ca7b159083d52c5a4734be25b2c4ff60daf6987b203225c6e020f2"} Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.697064 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:13 crc kubenswrapper[4820]: E0221 08:05:13.697593 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.975401 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerID="ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d" exitCode=0 Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.975474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerDied","Data":"ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d"} Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.979222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerStarted","Data":"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db"} Feb 21 08:05:13 crc kubenswrapper[4820]: I0221 08:05:13.997130 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerID="e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36" exitCode=0 Feb 21 08:05:13 crc 
kubenswrapper[4820]: I0221 08:05:13.997437 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerDied","Data":"e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.005971 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerStarted","Data":"7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.006559 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.008466 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" exitCode=0 Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.008538 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.010810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerStarted","Data":"9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848"} Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.011387 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.031679 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.203623955 podStartE2EDuration="1m21.03165861s" podCreationTimestamp="2026-02-21 08:03:54 +0000 UTC" firstStartedPulling="2026-02-21 08:03:56.091267798 +0000 UTC m=+4611.124351986" lastFinishedPulling="2026-02-21 08:04:39.919302443 +0000 UTC m=+4654.952386641" observedRunningTime="2026-02-21 08:05:15.031523676 +0000 UTC m=+4690.064607884" watchObservedRunningTime="2026-02-21 08:05:15.03165861 +0000 UTC m=+4690.064742808" Feb 21 08:05:15 crc kubenswrapper[4820]: I0221 08:05:15.079695 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.145840415 podStartE2EDuration="1m22.079680388s" podCreationTimestamp="2026-02-21 08:03:53 +0000 UTC" firstStartedPulling="2026-02-21 08:03:55.984248857 +0000 UTC m=+4611.017333055" lastFinishedPulling="2026-02-21 08:04:39.91808883 +0000 UTC m=+4654.951173028" observedRunningTime="2026-02-21 08:05:15.076048539 +0000 UTC m=+4690.109132737" watchObservedRunningTime="2026-02-21 08:05:15.079680388 +0000 UTC m=+4690.112764586" Feb 21 08:05:17 crc kubenswrapper[4820]: I0221 08:05:17.029546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerStarted","Data":"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc"} Feb 21 08:05:17 crc kubenswrapper[4820]: I0221 08:05:17.061035 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wc562" podStartSLOduration=3.232238631 podStartE2EDuration="6.061020623s" podCreationTimestamp="2026-02-21 08:05:11 +0000 UTC" firstStartedPulling="2026-02-21 08:05:12.966283945 +0000 UTC m=+4687.999368143" lastFinishedPulling="2026-02-21 08:05:15.795065937 +0000 UTC m=+4690.828150135" observedRunningTime="2026-02-21 08:05:17.056001007 +0000 UTC m=+4692.089085195" watchObservedRunningTime="2026-02-21 08:05:17.061020623 +0000 
UTC m=+4692.094104811" Feb 21 08:05:21 crc kubenswrapper[4820]: I0221 08:05:21.594882 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:21 crc kubenswrapper[4820]: I0221 08:05:21.595505 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:21 crc kubenswrapper[4820]: I0221 08:05:21.648353 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:22 crc kubenswrapper[4820]: I0221 08:05:22.105934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:22 crc kubenswrapper[4820]: I0221 08:05:22.160596 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.072419 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wc562" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" containerID="cri-o://f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" gracePeriod=2 Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.600767 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.746650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") pod \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.746724 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") pod \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.746763 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") pod \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\" (UID: \"dc79f9d5-f05f-41ee-849f-1c29ec7b382a\") " Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.747629 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities" (OuterVolumeSpecName: "utilities") pod "dc79f9d5-f05f-41ee-849f-1c29ec7b382a" (UID: "dc79f9d5-f05f-41ee-849f-1c29ec7b382a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.751647 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr" (OuterVolumeSpecName: "kube-api-access-2ccdr") pod "dc79f9d5-f05f-41ee-849f-1c29ec7b382a" (UID: "dc79f9d5-f05f-41ee-849f-1c29ec7b382a"). InnerVolumeSpecName "kube-api-access-2ccdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.783020 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc79f9d5-f05f-41ee-849f-1c29ec7b382a" (UID: "dc79f9d5-f05f-41ee-849f-1c29ec7b382a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.849319 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.849370 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:24 crc kubenswrapper[4820]: I0221 08:05:24.849388 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ccdr\" (UniqueName: \"kubernetes.io/projected/dc79f9d5-f05f-41ee-849f-1c29ec7b382a-kube-api-access-2ccdr\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080921 4820 generic.go:334] "Generic (PLEG): container finished" podID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" exitCode=0 Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080965 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc"} Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080976 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc562" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.080995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc562" event={"ID":"dc79f9d5-f05f-41ee-849f-1c29ec7b382a","Type":"ContainerDied","Data":"efd1e37053ca7b159083d52c5a4734be25b2c4ff60daf6987b203225c6e020f2"} Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.081011 4820 scope.go:117] "RemoveContainer" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.099636 4820 scope.go:117] "RemoveContainer" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.120921 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.121806 4820 scope.go:117] "RemoveContainer" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.130347 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc562"] Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.169132 4820 scope.go:117] "RemoveContainer" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" Feb 21 08:05:25 crc kubenswrapper[4820]: E0221 08:05:25.169567 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc\": container with ID starting with f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc not found: ID does not exist" containerID="f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.169598 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc"} err="failed to get container status \"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc\": rpc error: code = NotFound desc = could not find container \"f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc\": container with ID starting with f40866baae78fca52a1a8638160923d01732f256e4a5c9bf254a0fdfa96029cc not found: ID does not exist" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.169619 4820 scope.go:117] "RemoveContainer" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" Feb 21 08:05:25 crc kubenswrapper[4820]: E0221 08:05:25.169936 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db\": container with ID starting with 3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db not found: ID does not exist" containerID="3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.170001 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db"} err="failed to get container status \"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db\": rpc error: code = NotFound desc = could not find container \"3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db\": container with ID starting with 3231c5a4485f1fb272981b30eb2cef8c4cfa8162531845b2bb2abf25e07a31db not found: ID does not exist" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.170036 4820 scope.go:117] "RemoveContainer" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" Feb 21 08:05:25 crc kubenswrapper[4820]: E0221 
08:05:25.170723 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564\": container with ID starting with 006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564 not found: ID does not exist" containerID="006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.170751 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564"} err="failed to get container status \"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564\": rpc error: code = NotFound desc = could not find container \"006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564\": container with ID starting with 006ee9eb6ee421180f6625dd17c179681cb47fa291874779ea5618407ad0a564 not found: ID does not exist" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.441440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.448463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:25 crc kubenswrapper[4820]: I0221 08:05:25.718829 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" path="/var/lib/kubelet/pods/dc79f9d5-f05f-41ee-849f-1c29ec7b382a/volumes" Feb 21 08:05:28 crc kubenswrapper[4820]: I0221 08:05:28.696263 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:28 crc kubenswrapper[4820]: E0221 08:05:28.696730 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.060936 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:05:32 crc kubenswrapper[4820]: E0221 08:05:32.061751 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-utilities" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061765 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-utilities" Feb 21 08:05:32 crc kubenswrapper[4820]: E0221 08:05:32.061790 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061796 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" Feb 21 08:05:32 crc kubenswrapper[4820]: E0221 08:05:32.061807 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-content" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="extract-content" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.061940 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc79f9d5-f05f-41ee-849f-1c29ec7b382a" containerName="registry-server" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.063169 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.077155 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.155029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.155297 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.155399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.257189 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.257320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvn2\" (UniqueName: 
\"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.257377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.258275 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.258298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.283858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"dnsmasq-dns-79496f79cc-zmwwz\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:32 crc kubenswrapper[4820]: I0221 08:05:32.387855 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:32.679614 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:32.813853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:05:33 crc kubenswrapper[4820]: W0221 08:05:32.815202 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8165e702_d96e_4273_8536_7e6e363482d4.slice/crio-41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c WatchSource:0}: Error finding container 41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c: Status 404 returned error can't find the container with id 41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.137051 4820 generic.go:334] "Generic (PLEG): container finished" podID="8165e702-d96e-4273-8536-7e6e363482d4" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" exitCode=0 Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.137089 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerDied","Data":"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a"} Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.137113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerStarted","Data":"41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c"} Feb 21 08:05:33 crc kubenswrapper[4820]: I0221 08:05:33.264086 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:34 crc 
kubenswrapper[4820]: I0221 08:05:34.145664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerStarted","Data":"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62"} Feb 21 08:05:34 crc kubenswrapper[4820]: I0221 08:05:34.145973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:34 crc kubenswrapper[4820]: I0221 08:05:34.172459 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" podStartSLOduration=2.172439757 podStartE2EDuration="2.172439757s" podCreationTimestamp="2026-02-21 08:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:05:34.167883114 +0000 UTC m=+4709.200967322" watchObservedRunningTime="2026-02-21 08:05:34.172439757 +0000 UTC m=+4709.205523955" Feb 21 08:05:36 crc kubenswrapper[4820]: I0221 08:05:36.874656 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq" containerID="cri-o://9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848" gracePeriod=604796 Feb 21 08:05:37 crc kubenswrapper[4820]: I0221 08:05:37.778179 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq" containerID="cri-o://7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f" gracePeriod=604796 Feb 21 08:05:40 crc kubenswrapper[4820]: I0221 08:05:40.697539 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:40 crc kubenswrapper[4820]: E0221 08:05:40.697994 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.389452 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.464231 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"] Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.464516 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns" containerID="cri-o://95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" gracePeriod=10 Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.895947 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.971516 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") pod \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.971569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") pod \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.971635 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") pod \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\" (UID: \"95cd39a3-df2b-4f19-bf18-d5fcf790995e\") " Feb 21 08:05:42 crc kubenswrapper[4820]: I0221 08:05:42.979504 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn" (OuterVolumeSpecName: "kube-api-access-xkxxn") pod "95cd39a3-df2b-4f19-bf18-d5fcf790995e" (UID: "95cd39a3-df2b-4f19-bf18-d5fcf790995e"). InnerVolumeSpecName "kube-api-access-xkxxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.005900 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95cd39a3-df2b-4f19-bf18-d5fcf790995e" (UID: "95cd39a3-df2b-4f19-bf18-d5fcf790995e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.008347 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config" (OuterVolumeSpecName: "config") pod "95cd39a3-df2b-4f19-bf18-d5fcf790995e" (UID: "95cd39a3-df2b-4f19-bf18-d5fcf790995e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.073132 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxxn\" (UniqueName: \"kubernetes.io/projected/95cd39a3-df2b-4f19-bf18-d5fcf790995e-kube-api-access-xkxxn\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.073192 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.073205 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cd39a3-df2b-4f19-bf18-d5fcf790995e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211119 4820 generic.go:334] "Generic (PLEG): container finished" podID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" exitCode=0 Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211403 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerDied","Data":"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329"} Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb88b7bf5-mdrlh" event={"ID":"95cd39a3-df2b-4f19-bf18-d5fcf790995e","Type":"ContainerDied","Data":"c0fbba8abdf7dcc3b8fefeafbe554110877b155984d0717a8a1b1d9fb8c1f3ce"} Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.211710 4820 scope.go:117] "RemoveContainer" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.217068 4820 generic.go:334] "Generic (PLEG): container finished" podID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerID="9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848" exitCode=0 Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.217156 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerDied","Data":"9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848"} Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.245136 4820 scope.go:117] "RemoveContainer" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.253538 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"] Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.268262 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb88b7bf5-mdrlh"] Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.270839 4820 
scope.go:117] "RemoveContainer" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" Feb 21 08:05:43 crc kubenswrapper[4820]: E0221 08:05:43.271680 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329\": container with ID starting with 95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329 not found: ID does not exist" containerID="95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.271729 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329"} err="failed to get container status \"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329\": rpc error: code = NotFound desc = could not find container \"95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329\": container with ID starting with 95a6cbb78b045bbe2a0daa771a036d82d9f55998aec10f678c8c917a42dfc329 not found: ID does not exist" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.271760 4820 scope.go:117] "RemoveContainer" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" Feb 21 08:05:43 crc kubenswrapper[4820]: E0221 08:05:43.272110 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed\": container with ID starting with 34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed not found: ID does not exist" containerID="34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.272151 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed"} err="failed to get container status \"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed\": rpc error: code = NotFound desc = could not find container \"34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed\": container with ID starting with 34fac3b1fc075da5a69d6818a175a3eedef6253a2ef0c6c95f25628fba2416ed not found: ID does not exist" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.370573 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376375 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376409 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376458 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") pod 
\"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376538 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376606 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.376824 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") pod \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\" (UID: \"e1252400-6674-4a2e-a4ad-dc8f7fc45dee\") " Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.377104 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379379 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info" (OuterVolumeSpecName: "pod-info") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.379996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69" (OuterVolumeSpecName: "kube-api-access-r8c69") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "kube-api-access-r8c69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.381023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.382320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.408101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb" (OuterVolumeSpecName: "persistence") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.421460 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data" (OuterVolumeSpecName: "config-data") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.437023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf" (OuterVolumeSpecName: "server-conf") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.464679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e1252400-6674-4a2e-a4ad-dc8f7fc45dee" (UID: "e1252400-6674-4a2e-a4ad-dc8f7fc45dee"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478102 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478144 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-pod-info\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478156 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8c69\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-kube-api-access-r8c69\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478164 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478172 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478181 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478188 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-server-conf\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478195 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478203 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478258 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") on node \"crc\" "
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.478271 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1252400-6674-4a2e-a4ad-dc8f7fc45dee-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.492889 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.493098 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb") on node "crc"
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.580028 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:43 crc kubenswrapper[4820]: I0221 08:05:43.707026 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" path="/var/lib/kubelet/pods/95cd39a3-df2b-4f19-bf18-d5fcf790995e/volumes"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.230098 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerID="7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f" exitCode=0
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.230145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerDied","Data":"7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f"}
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.231405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e1252400-6674-4a2e-a4ad-dc8f7fc45dee","Type":"ContainerDied","Data":"e59ba11908d2427e77b8370ad9023c1d9f5c91e436080bbf10d9a6e9cb31d128"}
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.231437 4820 scope.go:117] "RemoveContainer" containerID="9dc2ce0225a16a318c6ee9facefcbd79822bc86ff89fda565eef1d4e97b96848"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.231593 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.308477 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.324144 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.324491 4820 scope.go:117] "RemoveContainer" containerID="e76872dbd8a799fbfe27270b370925e1ef8cf8b3b7eac3942e519639ee740a36"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.333164 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.372651 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373117 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="init"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373144 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="init"
Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373170 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="setup-container"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373181 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="setup-container"
Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373200 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373210 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq"
Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373232 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="setup-container"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373308 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="setup-container"
Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373351 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373373 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns"
Feb 21 08:05:44 crc kubenswrapper[4820]: E0221 08:05:44.373387 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373397 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373654 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" containerName="rabbitmq"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373676 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" containerName="rabbitmq"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.373697 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cd39a3-df2b-4f19-bf18-d5fcf790995e" containerName="dnsmasq-dns"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.375112 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382266 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382554 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382756 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382772 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.382955 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.383059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7r6cd"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.383118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-config-data\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399362 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8195e98f-70c8-4758-9d0a-e3a95de45075-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8195e98f-70c8-4758-9d0a-e3a95de45075-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399417 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399506 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrgm\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-kube-api-access-dnrgm\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399541 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.399639 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.401621 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500348 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500444 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500526 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500608 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500670 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") pod \"3d51a301-b647-44f6-bd29-7db35420fa2c\" (UID: \"3d51a301-b647-44f6-bd29-7db35420fa2c\") "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500853 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8195e98f-70c8-4758-9d0a-e3a95de45075-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500902 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500929 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrgm\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-kube-api-access-dnrgm\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.500991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-config-data\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501116 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8195e98f-70c8-4758-9d0a-e3a95de45075-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.501930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.502063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.502964 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.503040 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-config-data\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.503804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.503941 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.505012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8195e98f-70c8-4758-9d0a-e3a95de45075-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507465 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info" (OuterVolumeSpecName: "pod-info") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.507969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr" (OuterVolumeSpecName: "kube-api-access-mhfzr") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "kube-api-access-mhfzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.508145 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.508303 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8195e98f-70c8-4758-9d0a-e3a95de45075-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.508431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.510736 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8195e98f-70c8-4758-9d0a-e3a95de45075-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data" (OuterVolumeSpecName: "config-data") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522607 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrgm\" (UniqueName: \"kubernetes.io/projected/8195e98f-70c8-4758-9d0a-e3a95de45075-kube-api-access-dnrgm\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522703 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.522738 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/934de71409e5f275cb94cfa922d2597bbcc02a71598b29b6833fab6760155167/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.525547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2" (OuterVolumeSpecName: "persistence") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "pvc-84000e1c-c116-40f4-8806-c604396f3af2". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.552854 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf" (OuterVolumeSpecName: "server-conf") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.557644 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc80365f-e573-4aa1-8f4a-9a233e706efb\") pod \"rabbitmq-server-0\" (UID: \"8195e98f-70c8-4758-9d0a-e3a95de45075\") " pod="openstack/rabbitmq-server-0"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.593944 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3d51a301-b647-44f6-bd29-7db35420fa2c" (UID: "3d51a301-b647-44f6-bd29-7db35420fa2c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602773 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-server-conf\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602802 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602813 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602824 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602833 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d51a301-b647-44f6-bd29-7db35420fa2c-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602842 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602863 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") on node \"crc\" "
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602872 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhfzr\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-kube-api-access-mhfzr\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602888 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d51a301-b647-44f6-bd29-7db35420fa2c-pod-info\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602896 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d51a301-b647-44f6-bd29-7db35420fa2c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.602904 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d51a301-b647-44f6-bd29-7db35420fa2c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.617878 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.618072 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-84000e1c-c116-40f4-8806-c604396f3af2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2") on node "crc"
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.704210 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") on node \"crc\" DevicePath \"\""
Feb 21 08:05:44 crc kubenswrapper[4820]: I0221 08:05:44.708755 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.130127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.238538 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerStarted","Data":"12e981471e18de239f46c75b4371041708ca1be059f57feb9a45c0ba679ff1ca"}
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.240349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d51a301-b647-44f6-bd29-7db35420fa2c","Type":"ContainerDied","Data":"cf57958c059ab57096160a8511fc8c2747fdeefcad62b9b3daad83060dc8e5c3"}
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.240410 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.240415 4820 scope.go:117] "RemoveContainer" containerID="7b5b102bb5dc498916c31e2a47cabd7008dfcd32b5350d8d62ceae905597116f"
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.270075 4820 scope.go:117] "RemoveContainer" containerID="ad4bf5f49615bc061cef8f25965d606a659e5e3641c1325e3aac53557e29705d"
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.279307 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.285515 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.295771 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.297116 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.298995 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.302860 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.303259 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.303457 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z7jtg" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.303575 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.304325 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.304725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.311222 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415006 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415049 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415113 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sn4j\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-kube-api-access-7sn4j\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415209 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.415317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.516600 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.517750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.518961 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.518986 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb4bbdf2b86e995ba706b4b62c0c402d7bc60ad53da33c49f02f1a8b30c7c64a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.519930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.517782 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.519992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sn4j\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-kube-api-access-7sn4j\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520134 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520926 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.520998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.521503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.522122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.523503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.523606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.524087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: 
I0221 08:05:45.524972 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.543849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sn4j\" (UniqueName: \"kubernetes.io/projected/57d094d7-d5d2-4276-b0c2-cb98a15c0c3d-kube-api-access-7sn4j\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.554363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84000e1c-c116-40f4-8806-c604396f3af2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84000e1c-c116-40f4-8806-c604396f3af2\") pod \"rabbitmq-cell1-server-0\" (UID: \"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.635630 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.714616 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d51a301-b647-44f6-bd29-7db35420fa2c" path="/var/lib/kubelet/pods/3d51a301-b647-44f6-bd29-7db35420fa2c/volumes" Feb 21 08:05:45 crc kubenswrapper[4820]: I0221 08:05:45.715938 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1252400-6674-4a2e-a4ad-dc8f7fc45dee" path="/var/lib/kubelet/pods/e1252400-6674-4a2e-a4ad-dc8f7fc45dee/volumes" Feb 21 08:05:46 crc kubenswrapper[4820]: I0221 08:05:46.091211 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 21 08:05:46 crc kubenswrapper[4820]: I0221 08:05:46.251578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerStarted","Data":"aa03e2c65aae9f5463c21f0cd927ddaa99517136c5633ab70c3b64a8481658c8"} Feb 21 08:05:47 crc kubenswrapper[4820]: I0221 08:05:47.263955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerStarted","Data":"e4ced02ea21f015b77839446433655fb65512cfbdf70f5e3b21abb0747f5babd"} Feb 21 08:05:48 crc kubenswrapper[4820]: I0221 08:05:48.272516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerStarted","Data":"eb6b6434380a3c811bc2de3820828054bcc1c76954ea9e4e5fa1b5301448681d"} Feb 21 08:05:53 crc kubenswrapper[4820]: I0221 08:05:53.696763 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:05:53 crc kubenswrapper[4820]: E0221 08:05:53.698367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:05 crc kubenswrapper[4820]: I0221 08:06:05.702711 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:05 crc kubenswrapper[4820]: E0221 08:06:05.703578 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:17 crc kubenswrapper[4820]: I0221 08:06:17.697052 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:17 crc kubenswrapper[4820]: E0221 08:06:17.697887 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.517539 4820 generic.go:334] "Generic (PLEG): container finished" podID="57d094d7-d5d2-4276-b0c2-cb98a15c0c3d" containerID="eb6b6434380a3c811bc2de3820828054bcc1c76954ea9e4e5fa1b5301448681d" exitCode=0 Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.517599 
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerDied","Data":"eb6b6434380a3c811bc2de3820828054bcc1c76954ea9e4e5fa1b5301448681d"} Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.519289 4820 generic.go:334] "Generic (PLEG): container finished" podID="8195e98f-70c8-4758-9d0a-e3a95de45075" containerID="e4ced02ea21f015b77839446433655fb65512cfbdf70f5e3b21abb0747f5babd" exitCode=0 Feb 21 08:06:19 crc kubenswrapper[4820]: I0221 08:06:19.519321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerDied","Data":"e4ced02ea21f015b77839446433655fb65512cfbdf70f5e3b21abb0747f5babd"} Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.527841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8195e98f-70c8-4758-9d0a-e3a95de45075","Type":"ContainerStarted","Data":"da87fcd901c3ff0404185ecadc46ae54cb6f88c3fb2093d35d1985eeabbc62a7"} Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.528626 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.529222 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57d094d7-d5d2-4276-b0c2-cb98a15c0c3d","Type":"ContainerStarted","Data":"1475dfb713358b3c39a4e549c4134731da0735fb3d439b6da007f0e445f57e7e"} Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.529493 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.555548 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.555529606 podStartE2EDuration="36.555529606s" 
podCreationTimestamp="2026-02-21 08:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:06:20.547762115 +0000 UTC m=+4755.580846313" watchObservedRunningTime="2026-02-21 08:06:20.555529606 +0000 UTC m=+4755.588613804" Feb 21 08:06:20 crc kubenswrapper[4820]: I0221 08:06:20.587352 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.587332294 podStartE2EDuration="35.587332294s" podCreationTimestamp="2026-02-21 08:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:06:20.583396257 +0000 UTC m=+4755.616480475" watchObservedRunningTime="2026-02-21 08:06:20.587332294 +0000 UTC m=+4755.620416492" Feb 21 08:06:30 crc kubenswrapper[4820]: I0221 08:06:30.697418 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:30 crc kubenswrapper[4820]: E0221 08:06:30.698306 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:06:34 crc kubenswrapper[4820]: I0221 08:06:34.712445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 21 08:06:35 crc kubenswrapper[4820]: I0221 08:06:35.638445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.239420 4820 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.240525 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.243073 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.249429 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.368753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"mariadb-client\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.470452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"mariadb-client\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.493607 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"mariadb-client\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") " pod="openstack/mariadb-client" Feb 21 08:06:38 crc kubenswrapper[4820]: I0221 08:06:38.577273 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:06:39 crc kubenswrapper[4820]: I0221 08:06:39.062085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:06:39 crc kubenswrapper[4820]: I0221 08:06:39.073182 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:06:39 crc kubenswrapper[4820]: I0221 08:06:39.660916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerStarted","Data":"2c64262dba04eb03748f3b009a919554604fe4b3b0a2b587e5806fe9484db531"} Feb 21 08:06:40 crc kubenswrapper[4820]: I0221 08:06:40.669659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerStarted","Data":"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"} Feb 21 08:06:40 crc kubenswrapper[4820]: I0221 08:06:40.686593 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.215972114 podStartE2EDuration="2.686564979s" podCreationTimestamp="2026-02-21 08:06:38 +0000 UTC" firstStartedPulling="2026-02-21 08:06:39.072957091 +0000 UTC m=+4774.106041289" lastFinishedPulling="2026-02-21 08:06:39.543549916 +0000 UTC m=+4774.576634154" observedRunningTime="2026-02-21 08:06:40.68397483 +0000 UTC m=+4775.717059068" watchObservedRunningTime="2026-02-21 08:06:40.686564979 +0000 UTC m=+4775.719649197" Feb 21 08:06:41 crc kubenswrapper[4820]: I0221 08:06:41.697387 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:06:41 crc kubenswrapper[4820]: E0221 08:06:41.697611 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.153090 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.153914 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client" containerID="cri-o://ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" gracePeriod=30
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.620930 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.692396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") pod \"b79bec21-0f86-4055-b9f6-09e36fca39d7\" (UID: \"b79bec21-0f86-4055-b9f6-09e36fca39d7\") "
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.697117 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:06:52 crc kubenswrapper[4820]: E0221 08:06:52.697478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.698286 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm" (OuterVolumeSpecName: "kube-api-access-gpqqm") pod "b79bec21-0f86-4055-b9f6-09e36fca39d7" (UID: "b79bec21-0f86-4055-b9f6-09e36fca39d7"). InnerVolumeSpecName "kube-api-access-gpqqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758322 4820 generic.go:334] "Generic (PLEG): container finished" podID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876" exitCode=143
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758382 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerDied","Data":"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"}
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b79bec21-0f86-4055-b9f6-09e36fca39d7","Type":"ContainerDied","Data":"2c64262dba04eb03748f3b009a919554604fe4b3b0a2b587e5806fe9484db531"}
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758482 4820 scope.go:117] "RemoveContainer" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.758397 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.780073 4820 scope.go:117] "RemoveContainer" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"
Feb 21 08:06:52 crc kubenswrapper[4820]: E0221 08:06:52.781651 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876\": container with ID starting with ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876 not found: ID does not exist" containerID="ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.781706 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876"} err="failed to get container status \"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876\": rpc error: code = NotFound desc = could not find container \"ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876\": container with ID starting with ded1eb038852f8228dc5ac0ff52dfa9d853dba3e6bfe17267e12805cd0982876 not found: ID does not exist"
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.795576 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpqqm\" (UniqueName: \"kubernetes.io/projected/b79bec21-0f86-4055-b9f6-09e36fca39d7-kube-api-access-gpqqm\") on node \"crc\" DevicePath \"\""
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.797931 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Feb 21 08:06:52 crc kubenswrapper[4820]: I0221 08:06:52.802798 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Feb 21 08:06:53 crc kubenswrapper[4820]: I0221 08:06:53.707176 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" path="/var/lib/kubelet/pods/b79bec21-0f86-4055-b9f6-09e36fca39d7/volumes"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.859765 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k7zm4"]
Feb 21 08:07:01 crc kubenswrapper[4820]: E0221 08:07:01.860909 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.860924 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.861099 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79bec21-0f86-4055-b9f6-09e36fca39d7" containerName="mariadb-client"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.862830 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.866088 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"]
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.916766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.916850 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:01 crc kubenswrapper[4820]: I0221 08:07:01.916915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.017557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.017627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.017673 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.018040 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.018082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.048298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"community-operators-k7zm4\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") " pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.203349 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.662006 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"]
Feb 21 08:07:02 crc kubenswrapper[4820]: I0221 08:07:02.832804 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerStarted","Data":"e00deb2d54d453a338e2ace296a8af912f720f02a775533b7be2f4812bc8f721"}
Feb 21 08:07:03 crc kubenswrapper[4820]: I0221 08:07:03.696936 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:07:03 crc kubenswrapper[4820]: E0221 08:07:03.697166 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:07:03 crc kubenswrapper[4820]: I0221 08:07:03.841284 4820 generic.go:334] "Generic (PLEG): container finished" podID="d88cd1ee-a295-429d-9e88-133376560585" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85" exitCode=0
Feb 21 08:07:03 crc kubenswrapper[4820]: I0221 08:07:03.841338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"}
Feb 21 08:07:04 crc kubenswrapper[4820]: I0221 08:07:04.849978 4820 generic.go:334] "Generic (PLEG): container finished" podID="d88cd1ee-a295-429d-9e88-133376560585" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e" exitCode=0
Feb 21 08:07:04 crc kubenswrapper[4820]: I0221 08:07:04.850448 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"}
Feb 21 08:07:05 crc kubenswrapper[4820]: I0221 08:07:05.859481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerStarted","Data":"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"}
Feb 21 08:07:05 crc kubenswrapper[4820]: I0221 08:07:05.879740 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k7zm4" podStartSLOduration=3.4744456980000002 podStartE2EDuration="4.879723928s" podCreationTimestamp="2026-02-21 08:07:01 +0000 UTC" firstStartedPulling="2026-02-21 08:07:03.84351183 +0000 UTC m=+4798.876596028" lastFinishedPulling="2026-02-21 08:07:05.24879007 +0000 UTC m=+4800.281874258" observedRunningTime="2026-02-21 08:07:05.876432268 +0000 UTC m=+4800.909516476" watchObservedRunningTime="2026-02-21 08:07:05.879723928 +0000 UTC m=+4800.912808126"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.652715 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"]
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.654545 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.679438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.679708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.679780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.683066 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"]
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.781077 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.781594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.782257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.782025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.782319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:06 crc kubenswrapper[4820]: I0221 08:07:06.801382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"certified-operators-gxqfq\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") " pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.002081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.287013 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"]
Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.875669 4820 generic.go:334] "Generic (PLEG): container finished" podID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" exitCode=0
Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.875837 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a"}
Feb 21 08:07:07 crc kubenswrapper[4820]: I0221 08:07:07.875980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerStarted","Data":"f60cef1898ab6a9f09adbdcbca2c7577c703d38c3f92d6706838ae586d0ae809"}
Feb 21 08:07:08 crc kubenswrapper[4820]: I0221 08:07:08.887093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerStarted","Data":"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"}
Feb 21 08:07:09 crc kubenswrapper[4820]: I0221 08:07:09.896215 4820 generic.go:334] "Generic (PLEG): container finished" podID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" exitCode=0
Feb 21 08:07:09 crc kubenswrapper[4820]: I0221 08:07:09.896284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"}
Feb 21 08:07:10 crc kubenswrapper[4820]: I0221 08:07:10.905867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerStarted","Data":"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"}
Feb 21 08:07:10 crc kubenswrapper[4820]: I0221 08:07:10.932128 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxqfq" podStartSLOduration=2.527707543 podStartE2EDuration="4.93210583s" podCreationTimestamp="2026-02-21 08:07:06 +0000 UTC" firstStartedPulling="2026-02-21 08:07:07.877435844 +0000 UTC m=+4802.910520042" lastFinishedPulling="2026-02-21 08:07:10.281834131 +0000 UTC m=+4805.314918329" observedRunningTime="2026-02-21 08:07:10.92578413 +0000 UTC m=+4805.958868328" watchObservedRunningTime="2026-02-21 08:07:10.93210583 +0000 UTC m=+4805.965190038"
Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.204274 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.205229 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.240541 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:12 crc kubenswrapper[4820]: I0221 08:07:12.976020 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:13 crc kubenswrapper[4820]: I0221 08:07:13.243710 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"]
Feb 21 08:07:14 crc kubenswrapper[4820]: I0221 08:07:14.931073 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k7zm4" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" containerID="cri-o://aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" gracePeriod=2
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.316388 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.410057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") pod \"d88cd1ee-a295-429d-9e88-133376560585\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") "
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.410126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") pod \"d88cd1ee-a295-429d-9e88-133376560585\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") "
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.410271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") pod \"d88cd1ee-a295-429d-9e88-133376560585\" (UID: \"d88cd1ee-a295-429d-9e88-133376560585\") "
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.411106 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities" (OuterVolumeSpecName: "utilities") pod "d88cd1ee-a295-429d-9e88-133376560585" (UID: "d88cd1ee-a295-429d-9e88-133376560585"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.415259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks" (OuterVolumeSpecName: "kube-api-access-2xmks") pod "d88cd1ee-a295-429d-9e88-133376560585" (UID: "d88cd1ee-a295-429d-9e88-133376560585"). InnerVolumeSpecName "kube-api-access-2xmks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.512557 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xmks\" (UniqueName: \"kubernetes.io/projected/d88cd1ee-a295-429d-9e88-133376560585-kube-api-access-2xmks\") on node \"crc\" DevicePath \"\""
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.512593 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.868278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d88cd1ee-a295-429d-9e88-133376560585" (UID: "d88cd1ee-a295-429d-9e88-133376560585"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.919040 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88cd1ee-a295-429d-9e88-133376560585-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939552 4820 generic.go:334] "Generic (PLEG): container finished" podID="d88cd1ee-a295-429d-9e88-133376560585" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908" exitCode=0
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"}
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7zm4" event={"ID":"d88cd1ee-a295-429d-9e88-133376560585","Type":"ContainerDied","Data":"e00deb2d54d453a338e2ace296a8af912f720f02a775533b7be2f4812bc8f721"}
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939650 4820 scope.go:117] "RemoveContainer" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.939693 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7zm4"
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.960349 4820 scope.go:117] "RemoveContainer" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.975361 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"]
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.980855 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k7zm4"]
Feb 21 08:07:15 crc kubenswrapper[4820]: I0221 08:07:15.985519 4820 scope.go:117] "RemoveContainer" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"
Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.009666 4820 scope.go:117] "RemoveContainer" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"
Feb 21 08:07:16 crc kubenswrapper[4820]: E0221 08:07:16.010103 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908\": container with ID starting with aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908 not found: ID does not exist" containerID="aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"
Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010134 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908"} err="failed to get container status \"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908\": rpc error: code = NotFound desc = could not find container \"aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908\": container with ID starting with aa31a68342cc25029911e6fc6e26404c84521d55850fb153ce5562a4abe21908 not found: ID does not exist"
Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010157 4820 scope.go:117] "RemoveContainer" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"
Feb 21 08:07:16 crc kubenswrapper[4820]: E0221 08:07:16.010621 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e\": container with ID starting with e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e not found: ID does not exist" containerID="e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"
Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010660 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e"} err="failed to get container status \"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e\": rpc error: code = NotFound desc = could not find container \"e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e\": container with ID starting with e28e37ead92dd7b41f7e488f562d4cc927865a8f35cf7576b76f7d525a99070e not found: ID does not exist"
Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.010685 4820 scope.go:117] "RemoveContainer" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"
Feb 21 08:07:16 crc kubenswrapper[4820]: E0221 08:07:16.011001 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85\": container with ID starting with 1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85 not found: ID does not exist" containerID="1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"
Feb 21 08:07:16 crc kubenswrapper[4820]: I0221 08:07:16.011027 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85"} err="failed to get container status \"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85\": rpc error: code = NotFound desc = could not find container \"1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85\": container with ID starting with 1fff9cb5259f891e550dc0676e1f406e8f91691ca23a2d997887fab8405f4a85 not found: ID does not exist"
Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.003004 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.003050 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.043389 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.716918 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88cd1ee-a295-429d-9e88-133376560585" path="/var/lib/kubelet/pods/d88cd1ee-a295-429d-9e88-133376560585/volumes"
Feb 21 08:07:17 crc kubenswrapper[4820]: I0221 08:07:17.997580 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:18 crc kubenswrapper[4820]: I0221 08:07:18.643222 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"]
Feb 21 08:07:18 crc kubenswrapper[4820]: I0221 08:07:18.696555 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83"
Feb 21 08:07:18 crc kubenswrapper[4820]: E0221 08:07:18.696815 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:07:19 crc kubenswrapper[4820]: I0221 08:07:19.965479 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxqfq" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" containerID="cri-o://47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" gracePeriod=2
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.870409 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.897218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") pod \"025c1a7f-4a20-4175-b9a4-b21563a944fb\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") "
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.897348 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") pod \"025c1a7f-4a20-4175-b9a4-b21563a944fb\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") "
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.897415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") pod \"025c1a7f-4a20-4175-b9a4-b21563a944fb\" (UID: \"025c1a7f-4a20-4175-b9a4-b21563a944fb\") "
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.898266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities" (OuterVolumeSpecName: "utilities") pod "025c1a7f-4a20-4175-b9a4-b21563a944fb" (UID: "025c1a7f-4a20-4175-b9a4-b21563a944fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.904065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d" (OuterVolumeSpecName: "kube-api-access-mhr6d") pod "025c1a7f-4a20-4175-b9a4-b21563a944fb" (UID: "025c1a7f-4a20-4175-b9a4-b21563a944fb"). InnerVolumeSpecName "kube-api-access-mhr6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.954581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "025c1a7f-4a20-4175-b9a4-b21563a944fb" (UID: "025c1a7f-4a20-4175-b9a4-b21563a944fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975643 4820 generic.go:334] "Generic (PLEG): container finished" podID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb" exitCode=0
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"}
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975716 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqfq"
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975735 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqfq" event={"ID":"025c1a7f-4a20-4175-b9a4-b21563a944fb","Type":"ContainerDied","Data":"f60cef1898ab6a9f09adbdcbca2c7577c703d38c3f92d6706838ae586d0ae809"}
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.975755 4820 scope.go:117] "RemoveContainer" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"
Feb 21 08:07:20 crc kubenswrapper[4820]: I0221 08:07:20.996909 4820 scope.go:117] "RemoveContainer" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.004890 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.004934 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025c1a7f-4a20-4175-b9a4-b21563a944fb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.004985 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhr6d\" (UniqueName: \"kubernetes.io/projected/025c1a7f-4a20-4175-b9a4-b21563a944fb-kube-api-access-mhr6d\") on node \"crc\" DevicePath \"\""
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.016918 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"]
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.024266 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxqfq"]
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.038445 4820 scope.go:117] "RemoveContainer" containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a"
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.052795 4820 scope.go:117] "RemoveContainer" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"
Feb 21 08:07:21 crc kubenswrapper[4820]: E0221 08:07:21.053177 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb\": container with ID starting with 47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb not found: ID does not exist" containerID="47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"
Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053261 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb"} err="failed to get container status \"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb\": rpc error: code = NotFound desc = could not find container
\"47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb\": container with ID starting with 47fa81a1c6328c579db227e0c4b32e475e7c6e8d13db3074c54cea1d5918d1bb not found: ID does not exist" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053311 4820 scope.go:117] "RemoveContainer" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" Feb 21 08:07:21 crc kubenswrapper[4820]: E0221 08:07:21.053702 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961\": container with ID starting with 2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961 not found: ID does not exist" containerID="2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053740 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961"} err="failed to get container status \"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961\": rpc error: code = NotFound desc = could not find container \"2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961\": container with ID starting with 2431c10241ec4ce3d2f30d395974874d041828b18b9727ec820874ab4d04e961 not found: ID does not exist" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.053768 4820 scope.go:117] "RemoveContainer" containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" Feb 21 08:07:21 crc kubenswrapper[4820]: E0221 08:07:21.054138 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a\": container with ID starting with 5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a not found: ID does not exist" 
containerID="5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.054221 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a"} err="failed to get container status \"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a\": rpc error: code = NotFound desc = could not find container \"5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a\": container with ID starting with 5831a3376a979fd133fdbdd4958ca7ea98e10c8d94be523bf7c38006bc179c8a not found: ID does not exist" Feb 21 08:07:21 crc kubenswrapper[4820]: I0221 08:07:21.705571 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" path="/var/lib/kubelet/pods/025c1a7f-4a20-4175-b9a4-b21563a944fb/volumes" Feb 21 08:07:32 crc kubenswrapper[4820]: I0221 08:07:32.696795 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:07:32 crc kubenswrapper[4820]: E0221 08:07:32.697483 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:07:47 crc kubenswrapper[4820]: I0221 08:07:47.697043 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:07:47 crc kubenswrapper[4820]: E0221 08:07:47.697846 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:08:00 crc kubenswrapper[4820]: I0221 08:08:00.696207 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:08:00 crc kubenswrapper[4820]: E0221 08:08:00.697043 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:08:14 crc kubenswrapper[4820]: I0221 08:08:14.698017 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:08:15 crc kubenswrapper[4820]: I0221 08:08:15.414119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897"} Feb 21 08:08:18 crc kubenswrapper[4820]: I0221 08:08:18.147542 4820 scope.go:117] "RemoveContainer" containerID="abe2f0576407f2db8b453915dbe4741e83b046e73205be5c6cbf759bce72a106" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.853081 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854056 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" Feb 
21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854075 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854093 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854101 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854120 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854127 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854141 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-content" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854147 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-content" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854164 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854172 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="extract-utilities" Feb 21 08:09:51 crc kubenswrapper[4820]: E0221 08:09:51.854189 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-content" Feb 21 
08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854197 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="extract-content" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854426 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="025c1a7f-4a20-4175-b9a4-b21563a944fb" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854454 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88cd1ee-a295-429d-9e88-133376560585" containerName="registry-server" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.854909 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.857547 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.883112 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.970199 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:51 crc kubenswrapper[4820]: I0221 08:09:51.970542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.072024 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.072093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.074791 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.074824 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de5af2121fd419b811e8abb08129b759e3658785e5d5b3364ba51c94bc9f7907/globalmount\"" pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.093161 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.097512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"mariadb-copy-data\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.177303 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 08:09:52 crc kubenswrapper[4820]: I0221 08:09:52.655226 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 08:09:53 crc kubenswrapper[4820]: I0221 08:09:53.104113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerStarted","Data":"9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49"} Feb 21 08:09:53 crc kubenswrapper[4820]: I0221 08:09:53.104179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerStarted","Data":"707475d0c6275ed4702ec4fee55d65d5c005c4843fb7b9c91608c48f928cd4c6"} Feb 21 08:09:53 crc kubenswrapper[4820]: I0221 08:09:53.130366 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.130347566 podStartE2EDuration="3.130347566s" podCreationTimestamp="2026-02-21 08:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:09:53.122622206 +0000 UTC m=+4968.155706424" watchObservedRunningTime="2026-02-21 08:09:53.130347566 +0000 UTC m=+4968.163431764" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.616589 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.618012 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.622543 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.740355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"mariadb-client\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.842399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"mariadb-client\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.858645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"mariadb-client\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " pod="openstack/mariadb-client" Feb 21 08:09:55 crc kubenswrapper[4820]: I0221 08:09:55.942391 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:56 crc kubenswrapper[4820]: I0221 08:09:56.371429 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:57 crc kubenswrapper[4820]: I0221 08:09:57.129710 4820 generic.go:334] "Generic (PLEG): container finished" podID="db70ca85-292a-47ed-9028-c23b0e963849" containerID="7db30347c12dd2be7f43e71cdb85bf1d17d0f2f0e04cb11cd9773d0e72d380c5" exitCode=0 Feb 21 08:09:57 crc kubenswrapper[4820]: I0221 08:09:57.129774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"db70ca85-292a-47ed-9028-c23b0e963849","Type":"ContainerDied","Data":"7db30347c12dd2be7f43e71cdb85bf1d17d0f2f0e04cb11cd9773d0e72d380c5"} Feb 21 08:09:57 crc kubenswrapper[4820]: I0221 08:09:57.130369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"db70ca85-292a-47ed-9028-c23b0e963849","Type":"ContainerStarted","Data":"3ce6c6e87eb577c68cb66308708ff0ba300ad70780045e921ab8ece0b7911121"} Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.426120 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.470781 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_db70ca85-292a-47ed-9028-c23b0e963849/mariadb-client/0.log" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.495512 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.500861 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.589725 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") pod \"db70ca85-292a-47ed-9028-c23b0e963849\" (UID: \"db70ca85-292a-47ed-9028-c23b0e963849\") " Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.596585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh" (OuterVolumeSpecName: "kube-api-access-rcsqh") pod "db70ca85-292a-47ed-9028-c23b0e963849" (UID: "db70ca85-292a-47ed-9028-c23b0e963849"). InnerVolumeSpecName "kube-api-access-rcsqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.616867 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: E0221 08:09:58.617136 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db70ca85-292a-47ed-9028-c23b0e963849" containerName="mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.617152 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="db70ca85-292a-47ed-9028-c23b0e963849" containerName="mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.617344 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="db70ca85-292a-47ed-9028-c23b0e963849" containerName="mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.617807 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.625808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.691222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"mariadb-client\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.691432 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsqh\" (UniqueName: \"kubernetes.io/projected/db70ca85-292a-47ed-9028-c23b0e963849-kube-api-access-rcsqh\") on node \"crc\" DevicePath \"\"" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.792433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llfk4\" (UniqueName: 
\"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"mariadb-client\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.807954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"mariadb-client\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " pod="openstack/mariadb-client" Feb 21 08:09:58 crc kubenswrapper[4820]: I0221 08:09:58.950183 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.144185 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce6c6e87eb577c68cb66308708ff0ba300ad70780045e921ab8ece0b7911121" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.144272 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.160355 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="db70ca85-292a-47ed-9028-c23b0e963849" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.340807 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:09:59 crc kubenswrapper[4820]: W0221 08:09:59.343345 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ffe630a_95af_4704_b580_f934102c5c4f.slice/crio-85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1 WatchSource:0}: Error finding container 85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1: Status 404 returned error can't find the container with id 85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1 Feb 21 08:09:59 crc kubenswrapper[4820]: I0221 08:09:59.705337 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db70ca85-292a-47ed-9028-c23b0e963849" path="/var/lib/kubelet/pods/db70ca85-292a-47ed-9028-c23b0e963849/volumes" Feb 21 08:10:00 crc kubenswrapper[4820]: I0221 08:10:00.153690 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ffe630a-95af-4704-b580-f934102c5c4f" containerID="8ac70bfcb050a56388f6c954dd0e7f7f12588edc0c501a75ff209759990a1035" exitCode=0 Feb 21 08:10:00 crc kubenswrapper[4820]: I0221 08:10:00.153740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"4ffe630a-95af-4704-b580-f934102c5c4f","Type":"ContainerDied","Data":"8ac70bfcb050a56388f6c954dd0e7f7f12588edc0c501a75ff209759990a1035"} Feb 21 08:10:00 crc kubenswrapper[4820]: I0221 08:10:00.153766 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"4ffe630a-95af-4704-b580-f934102c5c4f","Type":"ContainerStarted","Data":"85b70e40adc79fe00b1741b01b854f7c3dcd0e56f4327d2862bfe4933035daa1"} Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.521290 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.539561 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_4ffe630a-95af-4704-b580-f934102c5c4f/mariadb-client/0.log" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.566775 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.574980 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.641540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") pod \"4ffe630a-95af-4704-b580-f934102c5c4f\" (UID: \"4ffe630a-95af-4704-b580-f934102c5c4f\") " Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.647925 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4" (OuterVolumeSpecName: "kube-api-access-llfk4") pod "4ffe630a-95af-4704-b580-f934102c5c4f" (UID: "4ffe630a-95af-4704-b580-f934102c5c4f"). InnerVolumeSpecName "kube-api-access-llfk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.718762 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" path="/var/lib/kubelet/pods/4ffe630a-95af-4704-b580-f934102c5c4f/volumes" Feb 21 08:10:01 crc kubenswrapper[4820]: I0221 08:10:01.744090 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llfk4\" (UniqueName: \"kubernetes.io/projected/4ffe630a-95af-4704-b580-f934102c5c4f-kube-api-access-llfk4\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:02 crc kubenswrapper[4820]: I0221 08:10:02.170005 4820 scope.go:117] "RemoveContainer" containerID="8ac70bfcb050a56388f6c954dd0e7f7f12588edc0c501a75ff209759990a1035" Feb 21 08:10:02 crc kubenswrapper[4820]: I0221 08:10:02.170085 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.420099 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 08:10:30 crc kubenswrapper[4820]: E0221 08:10:30.421074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" containerName="mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.421089 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" containerName="mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.421252 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffe630a-95af-4704-b580-f934102c5c4f" containerName="mariadb-client" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.421996 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.424699 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gxlth" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.425056 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.425230 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.425603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.432019 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.450844 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.452484 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.457972 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.466295 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.467509 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.504299 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.520638 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26cpl\" (UniqueName: \"kubernetes.io/projected/0292096a-9b13-475a-971c-cf4dae1a3f8f-kube-api-access-26cpl\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: 
I0221 08:10:30.527531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.527671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.632052 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.632123 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-config\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633496 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwtl\" (UniqueName: \"kubernetes.io/projected/c7377f38-4907-4b1d-a339-f274c122ef5c-kube-api-access-9fwtl\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633556 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633601 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633639 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-config\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26cpl\" (UniqueName: \"kubernetes.io/projected/0292096a-9b13-475a-971c-cf4dae1a3f8f-kube-api-access-26cpl\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrwd\" (UniqueName: \"kubernetes.io/projected/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-kube-api-access-jfrwd\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.633946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.634004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.634034 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.635171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.636150 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0292096a-9b13-475a-971c-cf4dae1a3f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.637213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 
08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.644319 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.644973 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.645008 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7f5a91cf66baa124de39703812a65cbead766845674401f574f4477cbb5ca47/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.655994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.660728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0292096a-9b13-475a-971c-cf4dae1a3f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.668900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-26cpl\" (UniqueName: \"kubernetes.io/projected/0292096a-9b13-475a-971c-cf4dae1a3f8f-kube-api-access-26cpl\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.686957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf9fe0e5-9b23-4f74-b6f9-f4bbe1141bbd\") pod \"ovsdbserver-nb-0\" (UID: \"0292096a-9b13-475a-971c-cf4dae1a3f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwtl\" (UniqueName: \"kubernetes.io/projected/c7377f38-4907-4b1d-a339-f274c122ef5c-kube-api-access-9fwtl\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735704 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735724 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735745 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-config\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " 
pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrwd\" (UniqueName: \"kubernetes.io/projected/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-kube-api-access-jfrwd\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.735929 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736028 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736070 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736148 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-config\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.736746 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738157 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-config\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738674 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738700 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/17960b22741a80753ff36376b4cd4e9eaaca50bec0a188a2737fcad06b3ddbc4/globalmount\"" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.738985 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.739006 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d716636993c182d97458935b224f5a7dc8e62f8801baf3c286a46a0042ece6e3/globalmount\"" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.739009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-config\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.739944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: 
\"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.740285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.747594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.750987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7377f38-4907-4b1d-a339-f274c122ef5c-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.753612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.759016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.759542 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.759623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7377f38-4907-4b1d-a339-f274c122ef5c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.761074 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrwd\" (UniqueName: \"kubernetes.io/projected/1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37-kube-api-access-jfrwd\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.763226 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwtl\" (UniqueName: \"kubernetes.io/projected/c7377f38-4907-4b1d-a339-f274c122ef5c-kube-api-access-9fwtl\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.780302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f9ac68d-da58-46ef-8c95-25977043006f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f9ac68d-da58-46ef-8c95-25977043006f\") pod \"ovsdbserver-nb-2\" (UID: \"c7377f38-4907-4b1d-a339-f274c122ef5c\") " pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.782450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abbc133d-c05d-4775-b708-ea6b12ca5f07\") pod \"ovsdbserver-nb-1\" (UID: \"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37\") " pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.789609 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.821106 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:30 crc kubenswrapper[4820]: I0221 08:10:30.838152 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.323206 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.375468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0292096a-9b13-475a-971c-cf4dae1a3f8f","Type":"ContainerStarted","Data":"26fc385354a329c447d5c1a9398ba80883737fc2bdac13520c6ff94a124a2852"} Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.421341 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 21 08:10:31 crc kubenswrapper[4820]: W0221 08:10:31.424971 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7377f38_4907_4b1d_a339_f274c122ef5c.slice/crio-27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724 WatchSource:0}: Error finding container 27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724: Status 404 returned error can't find the container with id 27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724 Feb 21 08:10:31 crc kubenswrapper[4820]: I0221 08:10:31.967078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-1"] Feb 21 08:10:31 crc kubenswrapper[4820]: W0221 08:10:31.978936 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff1c87b_f0e7_4917_a5ce_291ff2b6bd37.slice/crio-ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36 WatchSource:0}: Error finding container ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36: Status 404 returned error can't find the container with id ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36 Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.393696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37","Type":"ContainerStarted","Data":"ba87bc9307549432591bfb0ec24a9b9fd35ac41d5f4d37bbdd4ea03186cb1e36"} Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.395942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c7377f38-4907-4b1d-a339-f274c122ef5c","Type":"ContainerStarted","Data":"27d636d249b234d2b48bb73ca243a6ba6d2e98c57f91869e40be3d34a5e0d724"} Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.673068 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.684035 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.687199 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.690380 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.692340 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.695217 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.695334 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-g2nsk" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.711585 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.715712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.724275 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.725889 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.731780 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.744774 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777683 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777746 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: 
I0221 08:10:32.777881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777909 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.777989 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8vc\" (UniqueName: \"kubernetes.io/projected/66a6723b-ff49-4d22-a6cd-1e9509165729-kube-api-access-fq8vc\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778016 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspnf\" (UniqueName: \"kubernetes.io/projected/6aaa256c-7102-4960-ade0-b903b29b2716-kube-api-access-tspnf\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778217 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778281 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-config\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-config\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778448 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778502 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778540 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbjp\" (UniqueName: \"kubernetes.io/projected/dcf6ab13-da71-49ec-b2dc-27602f1a953f-kube-api-access-4rbjp\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.778567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-config\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879656 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbjp\" (UniqueName: \"kubernetes.io/projected/dcf6ab13-da71-49ec-b2dc-27602f1a953f-kube-api-access-4rbjp\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-config\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " 
pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879882 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879929 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.879979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 
08:10:32.880005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8vc\" (UniqueName: \"kubernetes.io/projected/66a6723b-ff49-4d22-a6cd-1e9509165729-kube-api-access-fq8vc\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspnf\" (UniqueName: \"kubernetes.io/projected/6aaa256c-7102-4960-ade0-b903b29b2716-kube-api-access-tspnf\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-config\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880226 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-config\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " 
pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880291 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.880988 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.881096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-config\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.881493 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.882005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/66a6723b-ff49-4d22-a6cd-1e9509165729-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.883554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.883974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcf6ab13-da71-49ec-b2dc-27602f1a953f-config\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.884401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.887445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.887791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-config\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888806 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.888835 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d6601ff4899818f7c8d8ba59347d6f94eb8d73b6c448990ea8ab0ab7adb5c28/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.890054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.890061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaa256c-7102-4960-ade0-b903b29b2716-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.891300 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.891338 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/963b89af96c0185a8f1cccdb2c155d83efe3a56eed47acaa45859e65a7377fb3/globalmount\"" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.892559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aaa256c-7102-4960-ade0-b903b29b2716-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.893276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a6723b-ff49-4d22-a6cd-1e9509165729-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.895716 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.895761 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24806ec7245ec04bce8cf628211bfb0c56c08782cce785ca4ab86ba4e6fee2a6/globalmount\"" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.897507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbjp\" (UniqueName: \"kubernetes.io/projected/dcf6ab13-da71-49ec-b2dc-27602f1a953f-kube-api-access-4rbjp\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.899623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf6ab13-da71-49ec-b2dc-27602f1a953f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.904416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8vc\" (UniqueName: \"kubernetes.io/projected/66a6723b-ff49-4d22-a6cd-1e9509165729-kube-api-access-fq8vc\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.911218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspnf\" (UniqueName: \"kubernetes.io/projected/6aaa256c-7102-4960-ade0-b903b29b2716-kube-api-access-tspnf\") pod \"ovsdbserver-sb-0\" (UID: 
\"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.928574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45a6e94b-3bb4-4e4c-a5a1-1cd9a2c69d25\") pod \"ovsdbserver-sb-0\" (UID: \"6aaa256c-7102-4960-ade0-b903b29b2716\") " pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.939654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1364d46-18f3-48bc-b8ee-6f9091350cdc\") pod \"ovsdbserver-sb-2\" (UID: \"dcf6ab13-da71-49ec-b2dc-27602f1a953f\") " pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:32 crc kubenswrapper[4820]: I0221 08:10:32.956164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9df59340-c38b-4498-98e8-cfb1627595fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9df59340-c38b-4498-98e8-cfb1627595fd\") pod \"ovsdbserver-sb-1\" (UID: \"66a6723b-ff49-4d22-a6cd-1e9509165729\") " pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.030154 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.045374 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.057225 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:33 crc kubenswrapper[4820]: I0221 08:10:33.724985 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 21 08:10:34 crc kubenswrapper[4820]: I0221 08:10:34.152800 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 21 08:10:34 crc kubenswrapper[4820]: I0221 08:10:34.630220 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 21 08:10:35 crc kubenswrapper[4820]: W0221 08:10:35.207476 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aaa256c_7102_4960_ade0_b903b29b2716.slice/crio-688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956 WatchSource:0}: Error finding container 688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956: Status 404 returned error can't find the container with id 688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956 Feb 21 08:10:35 crc kubenswrapper[4820]: W0221 08:10:35.212799 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66a6723b_ff49_4d22_a6cd_1e9509165729.slice/crio-7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082 WatchSource:0}: Error finding container 7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082: Status 404 returned error can't find the container with id 7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082 Feb 21 08:10:35 crc kubenswrapper[4820]: I0221 08:10:35.416126 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"66a6723b-ff49-4d22-a6cd-1e9509165729","Type":"ContainerStarted","Data":"7bf47c3fcde87706eebe4eefe0ed5aea24dd8116cc3f18f0a2c2017e24c53082"} Feb 21 08:10:35 crc kubenswrapper[4820]: I0221 08:10:35.417351 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"dcf6ab13-da71-49ec-b2dc-27602f1a953f","Type":"ContainerStarted","Data":"16f4889cd26e8affb691bc0a5c22d09707ede4ed9b34c5628c2c3b7fbed1a752"} Feb 21 08:10:35 crc kubenswrapper[4820]: I0221 08:10:35.418306 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6aaa256c-7102-4960-ade0-b903b29b2716","Type":"ContainerStarted","Data":"688d17db2c34f907e1d9ff58255ca280d3444c892a70461d042d88f70e92a956"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.431049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c7377f38-4907-4b1d-a339-f274c122ef5c","Type":"ContainerStarted","Data":"39546c24134e65b25010dd854838cabe029c35d7db634514ad69460ec908ef36"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.431433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c7377f38-4907-4b1d-a339-f274c122ef5c","Type":"ContainerStarted","Data":"f3039c9e08abc5ed462ca2195269e3257621d8be1483eecd52c05059d075ed73"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.435616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"66a6723b-ff49-4d22-a6cd-1e9509165729","Type":"ContainerStarted","Data":"cbeda9872c6653a713f344b7ab3c51e36f8001b85afa7b7c7864bd68cd59377d"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.437938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0292096a-9b13-475a-971c-cf4dae1a3f8f","Type":"ContainerStarted","Data":"449db6b634a25506f0e516a9f39bd0b0c359299187d2f1dd4aedc0fb9b5dd721"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.437988 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"0292096a-9b13-475a-971c-cf4dae1a3f8f","Type":"ContainerStarted","Data":"18f1601d2133b0f49ae8ca812831181b495ccb6ae74c12b6d27fae63ed7e5425"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.439875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"dcf6ab13-da71-49ec-b2dc-27602f1a953f","Type":"ContainerStarted","Data":"9b17ebb5989ecd7d75732d0cb6d13c172799a8fe0010291d8003987d51c7a19e"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.441752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37","Type":"ContainerStarted","Data":"08b3d21c8cc1b6778a5d54db6251c8627fe07b5ee828ebc5ba3d5dfde3538ef9"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.441783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37","Type":"ContainerStarted","Data":"77a70d21a868d8e957b8342988a314baf00c861f744a8676bb68fc451deb303a"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.446082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6aaa256c-7102-4960-ade0-b903b29b2716","Type":"ContainerStarted","Data":"23d76b3087435cd4d210ee2a42b85e548a2cba9f25800a8e4bb6ea4b93a38ca0"} Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.466927 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.417055103 podStartE2EDuration="7.46690681s" podCreationTimestamp="2026-02-21 08:10:29 +0000 UTC" firstStartedPulling="2026-02-21 08:10:31.427642824 +0000 UTC m=+5006.460727022" lastFinishedPulling="2026-02-21 08:10:35.477494531 +0000 UTC m=+5010.510578729" observedRunningTime="2026-02-21 08:10:36.462664026 +0000 UTC m=+5011.495748234" watchObservedRunningTime="2026-02-21 08:10:36.46690681 +0000 UTC m=+5011.499991008" Feb 21 08:10:36 crc 
kubenswrapper[4820]: I0221 08:10:36.488736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.001065346 podStartE2EDuration="7.488719671s" podCreationTimestamp="2026-02-21 08:10:29 +0000 UTC" firstStartedPulling="2026-02-21 08:10:31.980567765 +0000 UTC m=+5007.013651963" lastFinishedPulling="2026-02-21 08:10:35.46822208 +0000 UTC m=+5010.501306288" observedRunningTime="2026-02-21 08:10:36.483906741 +0000 UTC m=+5011.516990939" watchObservedRunningTime="2026-02-21 08:10:36.488719671 +0000 UTC m=+5011.521803869" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.512031 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.3056676019999998 podStartE2EDuration="7.512016472s" podCreationTimestamp="2026-02-21 08:10:29 +0000 UTC" firstStartedPulling="2026-02-21 08:10:31.327977338 +0000 UTC m=+5006.361061536" lastFinishedPulling="2026-02-21 08:10:35.534326208 +0000 UTC m=+5010.567410406" observedRunningTime="2026-02-21 08:10:36.505657319 +0000 UTC m=+5011.538741517" watchObservedRunningTime="2026-02-21 08:10:36.512016472 +0000 UTC m=+5011.545100660" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.790402 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.821646 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:36 crc kubenswrapper[4820]: I0221 08:10:36.838316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.455396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" 
event={"ID":"66a6723b-ff49-4d22-a6cd-1e9509165729","Type":"ContainerStarted","Data":"5a4b29aa4b5339175331fec948514136545de1c291628a73d5b27d0a58a5536d"} Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.459110 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"dcf6ab13-da71-49ec-b2dc-27602f1a953f","Type":"ContainerStarted","Data":"e805872eea6169cb25f75792fe1723de590d868a6f22b84e5414513c50c1f7ed"} Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.462435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6aaa256c-7102-4960-ade0-b903b29b2716","Type":"ContainerStarted","Data":"27f9737f693fe2927ce925215dcf1748116297134a6cb0791814257971733444"} Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.480207 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=5.568446358 podStartE2EDuration="6.480189157s" podCreationTimestamp="2026-02-21 08:10:31 +0000 UTC" firstStartedPulling="2026-02-21 08:10:35.215723888 +0000 UTC m=+5010.248808096" lastFinishedPulling="2026-02-21 08:10:36.127466697 +0000 UTC m=+5011.160550895" observedRunningTime="2026-02-21 08:10:37.473862826 +0000 UTC m=+5012.506947024" watchObservedRunningTime="2026-02-21 08:10:37.480189157 +0000 UTC m=+5012.513273355" Feb 21 08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.497501 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=5.584201405 podStartE2EDuration="6.497484175s" podCreationTimestamp="2026-02-21 08:10:31 +0000 UTC" firstStartedPulling="2026-02-21 08:10:35.211933045 +0000 UTC m=+5010.245017243" lastFinishedPulling="2026-02-21 08:10:36.125215825 +0000 UTC m=+5011.158300013" observedRunningTime="2026-02-21 08:10:37.492829359 +0000 UTC m=+5012.525913567" watchObservedRunningTime="2026-02-21 08:10:37.497484175 +0000 UTC m=+5012.530568373" Feb 21 
08:10:37 crc kubenswrapper[4820]: I0221 08:10:37.514695 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.600945627 podStartE2EDuration="6.51467967s" podCreationTimestamp="2026-02-21 08:10:31 +0000 UTC" firstStartedPulling="2026-02-21 08:10:35.211890664 +0000 UTC m=+5010.244974862" lastFinishedPulling="2026-02-21 08:10:36.125624707 +0000 UTC m=+5011.158708905" observedRunningTime="2026-02-21 08:10:37.512411739 +0000 UTC m=+5012.545495957" watchObservedRunningTime="2026-02-21 08:10:37.51467967 +0000 UTC m=+5012.547763868" Feb 21 08:10:38 crc kubenswrapper[4820]: I0221 08:10:38.030605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:38 crc kubenswrapper[4820]: I0221 08:10:38.045483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:38 crc kubenswrapper[4820]: I0221 08:10:38.059520 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.030783 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.046188 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.057636 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.075685 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.095730 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: 
I0221 08:10:39.114897 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.824479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.824852 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.864587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.864925 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.896115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:39 crc kubenswrapper[4820]: I0221 08:10:39.896559 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.570787 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.573613 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.590250 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.767694 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.769118 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.774077 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.779163 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.828814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " 
pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930478 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.930647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.931668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc 
kubenswrapper[4820]: I0221 08:10:40.931742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.931796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:40 crc kubenswrapper[4820]: I0221 08:10:40.948113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"dnsmasq-dns-84fbbffdc5-bsfmf\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:41 crc kubenswrapper[4820]: I0221 08:10:41.094388 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:41 crc kubenswrapper[4820]: I0221 08:10:41.501308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:41 crc kubenswrapper[4820]: W0221 08:10:41.513122 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3911e64b_266d_4c66_9aec_4e26cec73c06.slice/crio-957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35 WatchSource:0}: Error finding container 957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35: Status 404 returned error can't find the container with id 957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35 Feb 21 08:10:41 crc kubenswrapper[4820]: I0221 08:10:41.541047 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerStarted","Data":"957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35"} Feb 21 08:10:42 crc kubenswrapper[4820]: I0221 08:10:42.548764 4820 generic.go:334] "Generic (PLEG): container finished" podID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" exitCode=0 Feb 21 08:10:42 crc kubenswrapper[4820]: I0221 08:10:42.548900 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerDied","Data":"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f"} Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.067846 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.081658 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" 
Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.099921 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.353153 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.379415 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.392509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.394589 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.399664 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469263 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-296sq\" 
(UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.469605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.561606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerStarted","Data":"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626"} Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.562779 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570734 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.570924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-296sq\" (UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.571868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.572007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.572053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.572276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.578561 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" podStartSLOduration=3.578547519 podStartE2EDuration="3.578547519s" podCreationTimestamp="2026-02-21 08:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:10:43.578018674 +0000 UTC m=+5018.611102882" watchObservedRunningTime="2026-02-21 08:10:43.578547519 +0000 UTC m=+5018.611631727" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.589285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-296sq\" (UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"dnsmasq-dns-59c565c565-4g68w\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.714280 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.859388 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:10:43 crc kubenswrapper[4820]: I0221 08:10:43.859693 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.189152 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571132 4820 generic.go:334] "Generic (PLEG): container finished" podID="623dbf87-d39f-4026-9aa5-72d52508407b" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" exitCode=0 Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerDied","Data":"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6"} Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerStarted","Data":"f79446424404530462274952dece2308d0d1ba04fb18b87302d89305eb07556f"} Feb 21 08:10:44 crc kubenswrapper[4820]: I0221 08:10:44.571787 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="dnsmasq-dns" containerID="cri-o://8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" gracePeriod=10 Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.013298 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.116958 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.117152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.117351 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.117428 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") pod \"3911e64b-266d-4c66-9aec-4e26cec73c06\" (UID: \"3911e64b-266d-4c66-9aec-4e26cec73c06\") " Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.121347 4820 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4" (OuterVolumeSpecName: "kube-api-access-dh4h4") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "kube-api-access-dh4h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.153917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config" (OuterVolumeSpecName: "config") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.154654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.156181 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3911e64b-266d-4c66-9aec-4e26cec73c06" (UID: "3911e64b-266d-4c66-9aec-4e26cec73c06"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219540 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh4h4\" (UniqueName: \"kubernetes.io/projected/3911e64b-266d-4c66-9aec-4e26cec73c06-kube-api-access-dh4h4\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219581 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219591 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.219603 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3911e64b-266d-4c66-9aec-4e26cec73c06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.430688 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.431074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="init" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431096 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="init" Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.431112 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="dnsmasq-dns" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431119 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" 
containerName="dnsmasq-dns" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431275 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerName="dnsmasq-dns" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.431896 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.434801 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.445914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.525611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.525725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.525803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581371 4820 generic.go:334] "Generic (PLEG): container 
finished" podID="3911e64b-266d-4c66-9aec-4e26cec73c06" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" exitCode=0 Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581447 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerDied","Data":"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626"} Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84fbbffdc5-bsfmf" event={"ID":"3911e64b-266d-4c66-9aec-4e26cec73c06","Type":"ContainerDied","Data":"957d9ad0a6514e8d9900f14340121c423436536ca7bd144e4d0df81d0dcf5a35"} Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.581525 4820 scope.go:117] "RemoveContainer" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.585491 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerStarted","Data":"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156"} Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.585665 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.607555 4820 scope.go:117] "RemoveContainer" containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.614382 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59c565c565-4g68w" 
podStartSLOduration=2.614332501 podStartE2EDuration="2.614332501s" podCreationTimestamp="2026-02-21 08:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:10:45.61136091 +0000 UTC m=+5020.644445118" watchObservedRunningTime="2026-02-21 08:10:45.614332501 +0000 UTC m=+5020.647416689" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.628351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.628455 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.628503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.637889 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638310 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638424 4820 scope.go:117] "RemoveContainer" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638415 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e5c2adb17362cb38b9613e55900aac4eb2dcd2074de3a1084943e2c54cd00e8/globalmount\"" pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.638851 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626\": container with ID starting with 8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626 not found: ID does not exist" containerID="8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638893 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626"} err="failed to get container status \"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626\": rpc error: code = NotFound desc = could not find container \"8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626\": container with ID starting with 8dcc15ea338766979e677854e5b2375eaa6eaf2f18dbe0bafdad2e0310025626 not found: ID does not exist" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.638921 4820 scope.go:117] "RemoveContainer" 
containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" Feb 21 08:10:45 crc kubenswrapper[4820]: E0221 08:10:45.639490 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f\": container with ID starting with e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f not found: ID does not exist" containerID="e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.639549 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f"} err="failed to get container status \"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f\": rpc error: code = NotFound desc = could not find container \"e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f\": container with ID starting with e352ce2740f891ade1323b7439f3fdc44fc1008f3c0e8069237c5ecc5aca5c4f not found: ID does not exist" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.642694 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.649464 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84fbbffdc5-bsfmf"] Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.653302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.677815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"ovn-copy-data\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") " pod="openstack/ovn-copy-data" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.707389 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3911e64b-266d-4c66-9aec-4e26cec73c06" path="/var/lib/kubelet/pods/3911e64b-266d-4c66-9aec-4e26cec73c06/volumes" Feb 21 08:10:45 crc kubenswrapper[4820]: I0221 08:10:45.751739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 21 08:10:46 crc kubenswrapper[4820]: I0221 08:10:46.295303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 21 08:10:46 crc kubenswrapper[4820]: I0221 08:10:46.594398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerStarted","Data":"8dd551c3890db1e73ddd2531407ed1073b385c0ce262dc89304db8e225ef25b4"} Feb 21 08:10:47 crc kubenswrapper[4820]: I0221 08:10:47.602670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerStarted","Data":"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"} Feb 21 08:10:47 crc kubenswrapper[4820]: I0221 08:10:47.617807 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.432816303 podStartE2EDuration="3.617789588s" podCreationTimestamp="2026-02-21 08:10:44 +0000 UTC" firstStartedPulling="2026-02-21 08:10:46.303070716 +0000 UTC m=+5021.336154914" lastFinishedPulling="2026-02-21 08:10:46.488044001 +0000 UTC m=+5021.521128199" observedRunningTime="2026-02-21 08:10:47.615740492 +0000 UTC m=+5022.648824690" watchObservedRunningTime="2026-02-21 
08:10:47.617789588 +0000 UTC m=+5022.650873776" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.744279 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.746281 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748210 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748497 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748733 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.748911 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-trcmc" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.759005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-scripts\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840217 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-config\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5ww\" (UniqueName: \"kubernetes.io/projected/f9b120b4-ea8d-499d-a8ca-43faa31f000e-kube-api-access-dx5ww\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.840900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-scripts\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-config\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942648 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.942743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5ww\" (UniqueName: \"kubernetes.io/projected/f9b120b4-ea8d-499d-a8ca-43faa31f000e-kube-api-access-dx5ww\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc 
kubenswrapper[4820]: I0221 08:10:52.942782 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.943468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.943650 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-scripts\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.943713 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b120b4-ea8d-499d-a8ca-43faa31f000e-config\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.949326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.951605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.955209 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b120b4-ea8d-499d-a8ca-43faa31f000e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:52 crc kubenswrapper[4820]: I0221 08:10:52.966048 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5ww\" (UniqueName: \"kubernetes.io/projected/f9b120b4-ea8d-499d-a8ca-43faa31f000e-kube-api-access-dx5ww\") pod \"ovn-northd-0\" (UID: \"f9b120b4-ea8d-499d-a8ca-43faa31f000e\") " pod="openstack/ovn-northd-0" Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.113905 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.565553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 21 08:10:53 crc kubenswrapper[4820]: W0221 08:10:53.572704 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b120b4_ea8d_499d_a8ca_43faa31f000e.slice/crio-775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d WatchSource:0}: Error finding container 775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d: Status 404 returned error can't find the container with id 775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.646486 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f9b120b4-ea8d-499d-a8ca-43faa31f000e","Type":"ContainerStarted","Data":"775c25c262f8f07f427662f3a70d59b379c47377d246d25aee6f7158fd5cb38d"} Feb 21 08:10:53 crc kubenswrapper[4820]: 
I0221 08:10:53.715382 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.788123 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:10:53 crc kubenswrapper[4820]: I0221 08:10:53.788365 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" containerID="cri-o://6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" gracePeriod=10 Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.382522 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.488774 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") pod \"8165e702-d96e-4273-8536-7e6e363482d4\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.488870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") pod \"8165e702-d96e-4273-8536-7e6e363482d4\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.488906 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") pod \"8165e702-d96e-4273-8536-7e6e363482d4\" (UID: \"8165e702-d96e-4273-8536-7e6e363482d4\") " Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.495673 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2" (OuterVolumeSpecName: "kube-api-access-dvvn2") pod "8165e702-d96e-4273-8536-7e6e363482d4" (UID: "8165e702-d96e-4273-8536-7e6e363482d4"). InnerVolumeSpecName "kube-api-access-dvvn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.529960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config" (OuterVolumeSpecName: "config") pod "8165e702-d96e-4273-8536-7e6e363482d4" (UID: "8165e702-d96e-4273-8536-7e6e363482d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.537411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8165e702-d96e-4273-8536-7e6e363482d4" (UID: "8165e702-d96e-4273-8536-7e6e363482d4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.590960 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.591010 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvvn2\" (UniqueName: \"kubernetes.io/projected/8165e702-d96e-4273-8536-7e6e363482d4-kube-api-access-dvvn2\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.591025 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165e702-d96e-4273-8536-7e6e363482d4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.658138 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f9b120b4-ea8d-499d-a8ca-43faa31f000e","Type":"ContainerStarted","Data":"9afdb48844b3ad1c0e7a303a434c5fd3ff0eb1584d240bd465041435e7b5bcc5"} Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.660519 4820 generic.go:334] "Generic (PLEG): container finished" podID="8165e702-d96e-4273-8536-7e6e363482d4" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" exitCode=0 Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.660693 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerDied","Data":"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62"} Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.661404 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" 
event={"ID":"8165e702-d96e-4273-8536-7e6e363482d4","Type":"ContainerDied","Data":"41d8b8dbadec17fbbc4f67602cdb951273b7c33c0b12dcd66df04c7b23b9452c"} Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.661440 4820 scope.go:117] "RemoveContainer" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.660744 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79496f79cc-zmwwz" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.689440 4820 scope.go:117] "RemoveContainer" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.692297 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.724370 4820 scope.go:117] "RemoveContainer" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" Feb 21 08:10:54 crc kubenswrapper[4820]: E0221 08:10:54.724837 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62\": container with ID starting with 6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62 not found: ID does not exist" containerID="6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.724871 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62"} err="failed to get container status \"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62\": rpc error: code = NotFound desc = could not find container \"6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62\": container with ID starting 
with 6deee3788f69b2a7b7e584419fb590f5d2bd21aaa5f879352c69aabde2af7f62 not found: ID does not exist" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.724891 4820 scope.go:117] "RemoveContainer" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.725031 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79496f79cc-zmwwz"] Feb 21 08:10:54 crc kubenswrapper[4820]: E0221 08:10:54.725304 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a\": container with ID starting with e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a not found: ID does not exist" containerID="e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a" Feb 21 08:10:54 crc kubenswrapper[4820]: I0221 08:10:54.725354 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a"} err="failed to get container status \"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a\": rpc error: code = NotFound desc = could not find container \"e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a\": container with ID starting with e36b721c273d140fdfef533b0dcf05cba20cc6ef83b9395a3c70e823f7f1c60a not found: ID does not exist" Feb 21 08:10:55 crc kubenswrapper[4820]: I0221 08:10:55.669113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f9b120b4-ea8d-499d-a8ca-43faa31f000e","Type":"ContainerStarted","Data":"9a6065d2d09784afcdd78a1d0210c15622686db5b1c2522b9d329bf49c577286"} Feb 21 08:10:55 crc kubenswrapper[4820]: I0221 08:10:55.669783 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 21 08:10:55 crc 
kubenswrapper[4820]: I0221 08:10:55.693988 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.835659679 podStartE2EDuration="3.693965732s" podCreationTimestamp="2026-02-21 08:10:52 +0000 UTC" firstStartedPulling="2026-02-21 08:10:53.575563805 +0000 UTC m=+5028.608648003" lastFinishedPulling="2026-02-21 08:10:54.433869858 +0000 UTC m=+5029.466954056" observedRunningTime="2026-02-21 08:10:55.689328106 +0000 UTC m=+5030.722412324" watchObservedRunningTime="2026-02-21 08:10:55.693965732 +0000 UTC m=+5030.727049930" Feb 21 08:10:55 crc kubenswrapper[4820]: I0221 08:10:55.708626 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8165e702-d96e-4273-8536-7e6e363482d4" path="/var/lib/kubelet/pods/8165e702-d96e-4273-8536-7e6e363482d4/volumes" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.715473 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:10:57 crc kubenswrapper[4820]: E0221 08:10:57.717051 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.717152 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" Feb 21 08:10:57 crc kubenswrapper[4820]: E0221 08:10:57.717320 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="init" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.717415 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="init" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.717674 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8165e702-d96e-4273-8536-7e6e363482d4" containerName="dnsmasq-dns" Feb 21 08:10:57 crc 
kubenswrapper[4820]: I0221 08:10:57.718342 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.718549 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.741121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.741400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.803446 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.805355 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.808214 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.812510 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.843422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: 
\"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.845041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.863796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"keystone-db-create-l4whm\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.945189 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.945287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.945956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod 
\"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:57 crc kubenswrapper[4820]: I0221 08:10:57.965032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"keystone-a50c-account-create-update-p6g4x\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.038619 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm" Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.124157 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.554063 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:10:58 crc kubenswrapper[4820]: W0221 08:10:58.558042 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d64f747_d529_4e8f_b2ea_11458f16f00c.slice/crio-e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167 WatchSource:0}: Error finding container e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167: Status 404 returned error can't find the container with id e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167 Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.619660 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:10:58 crc kubenswrapper[4820]: W0221 08:10:58.621682 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode41e7890_6ac4_4d64_aded_2e5934d7ceee.slice/crio-abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557 WatchSource:0}: Error finding container abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557: Status 404 returned error can't find the container with id abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557 Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.702675 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a50c-account-create-update-p6g4x" event={"ID":"e41e7890-6ac4-4d64-aded-2e5934d7ceee","Type":"ContainerStarted","Data":"abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557"} Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.705428 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerStarted","Data":"ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97"} Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.705463 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerStarted","Data":"e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167"} Feb 21 08:10:58 crc kubenswrapper[4820]: I0221 08:10:58.720843 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-l4whm" podStartSLOduration=1.720825999 podStartE2EDuration="1.720825999s" podCreationTimestamp="2026-02-21 08:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:10:58.718003132 +0000 UTC m=+5033.751087330" watchObservedRunningTime="2026-02-21 08:10:58.720825999 +0000 UTC m=+5033.753910197" Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.711555 4820 
generic.go:334] "Generic (PLEG): container finished" podID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerID="afe15da7c9744a1622ba946b0a8f2cad964248c6e6556d307d9afb8803cea6fb" exitCode=0 Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.711623 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a50c-account-create-update-p6g4x" event={"ID":"e41e7890-6ac4-4d64-aded-2e5934d7ceee","Type":"ContainerDied","Data":"afe15da7c9744a1622ba946b0a8f2cad964248c6e6556d307d9afb8803cea6fb"} Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.713591 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerID="ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97" exitCode=0 Feb 21 08:10:59 crc kubenswrapper[4820]: I0221 08:10:59.713701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerDied","Data":"ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97"} Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.140794 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.146768 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-l4whm" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.200984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") pod \"8d64f747-d529-4e8f-b2ea-11458f16f00c\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.201269 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") pod \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.201355 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") pod \"8d64f747-d529-4e8f-b2ea-11458f16f00c\" (UID: \"8d64f747-d529-4e8f-b2ea-11458f16f00c\") " Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.201503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") pod \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\" (UID: \"e41e7890-6ac4-4d64-aded-2e5934d7ceee\") " Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.202074 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e41e7890-6ac4-4d64-aded-2e5934d7ceee" (UID: "e41e7890-6ac4-4d64-aded-2e5934d7ceee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.202124 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d64f747-d529-4e8f-b2ea-11458f16f00c" (UID: "8d64f747-d529-4e8f-b2ea-11458f16f00c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.210487 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn" (OuterVolumeSpecName: "kube-api-access-994zn") pod "8d64f747-d529-4e8f-b2ea-11458f16f00c" (UID: "8d64f747-d529-4e8f-b2ea-11458f16f00c"). InnerVolumeSpecName "kube-api-access-994zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.210527 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65" (OuterVolumeSpecName: "kube-api-access-qjt65") pod "e41e7890-6ac4-4d64-aded-2e5934d7ceee" (UID: "e41e7890-6ac4-4d64-aded-2e5934d7ceee"). InnerVolumeSpecName "kube-api-access-qjt65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303279 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d64f747-d529-4e8f-b2ea-11458f16f00c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303315 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e41e7890-6ac4-4d64-aded-2e5934d7ceee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303324 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994zn\" (UniqueName: \"kubernetes.io/projected/8d64f747-d529-4e8f-b2ea-11458f16f00c-kube-api-access-994zn\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.303335 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjt65\" (UniqueName: \"kubernetes.io/projected/e41e7890-6ac4-4d64-aded-2e5934d7ceee-kube-api-access-qjt65\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.743379 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l4whm" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.743601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l4whm" event={"ID":"8d64f747-d529-4e8f-b2ea-11458f16f00c","Type":"ContainerDied","Data":"e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167"} Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.744012 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00d15e13327c7cdcd9149f7d7111a91f8d99512eb99938b91e6226851b65167" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.746461 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a50c-account-create-update-p6g4x" Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.746432 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a50c-account-create-update-p6g4x" event={"ID":"e41e7890-6ac4-4d64-aded-2e5934d7ceee","Type":"ContainerDied","Data":"abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557"} Feb 21 08:11:01 crc kubenswrapper[4820]: I0221 08:11:01.746663 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe1cc7d0d78192b7e40d7db6b1080f8154648cbe94b684551bdf9dfe36b5557" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427252 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:11:03 crc kubenswrapper[4820]: E0221 08:11:03.427657 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerName="mariadb-database-create" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427675 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerName="mariadb-database-create" Feb 21 08:11:03 crc kubenswrapper[4820]: E0221 08:11:03.427692 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerName="mariadb-account-create-update" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427699 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerName="mariadb-account-create-update" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427877 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" containerName="mariadb-account-create-update" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.427903 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" containerName="mariadb-database-create" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.428563 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.434124 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.435104 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.435113 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.435620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.441137 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.546472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.546525 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.546570 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.647981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.648034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.648073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.656693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.666112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.673176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"keystone-db-sync-spcxr\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:03 crc kubenswrapper[4820]: I0221 08:11:03.746110 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:04 crc kubenswrapper[4820]: I0221 08:11:04.295459 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:11:04 crc kubenswrapper[4820]: I0221 08:11:04.771416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerStarted","Data":"e032efda30137eb490834e6dc85e9de283414db4a3a67a9c09d38739a3eb83b1"} Feb 21 08:11:12 crc kubenswrapper[4820]: I0221 08:11:12.832564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerStarted","Data":"07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373"} Feb 21 08:11:12 crc kubenswrapper[4820]: I0221 08:11:12.857275 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-spcxr" podStartSLOduration=2.379123751 podStartE2EDuration="9.857233534s" podCreationTimestamp="2026-02-21 08:11:03 +0000 UTC" firstStartedPulling="2026-02-21 08:11:04.303904419 +0000 UTC m=+5039.336988627" lastFinishedPulling="2026-02-21 08:11:11.782014212 +0000 UTC m=+5046.815098410" 
observedRunningTime="2026-02-21 08:11:12.85339149 +0000 UTC m=+5047.886475698" watchObservedRunningTime="2026-02-21 08:11:12.857233534 +0000 UTC m=+5047.890317742" Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.177634 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.816531 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.816621 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.843650 4820 generic.go:334] "Generic (PLEG): container finished" podID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerID="07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373" exitCode=0 Feb 21 08:11:13 crc kubenswrapper[4820]: I0221 08:11:13.843718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerDied","Data":"07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373"} Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.180733 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.245431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") pod \"211ff6a9-0360-4606-92ca-cd4904494ff6\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.245661 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") pod \"211ff6a9-0360-4606-92ca-cd4904494ff6\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.245865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") pod \"211ff6a9-0360-4606-92ca-cd4904494ff6\" (UID: \"211ff6a9-0360-4606-92ca-cd4904494ff6\") " Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.252813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw" (OuterVolumeSpecName: "kube-api-access-dz4dw") pod "211ff6a9-0360-4606-92ca-cd4904494ff6" (UID: "211ff6a9-0360-4606-92ca-cd4904494ff6"). InnerVolumeSpecName "kube-api-access-dz4dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.273448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "211ff6a9-0360-4606-92ca-cd4904494ff6" (UID: "211ff6a9-0360-4606-92ca-cd4904494ff6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.296462 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data" (OuterVolumeSpecName: "config-data") pod "211ff6a9-0360-4606-92ca-cd4904494ff6" (UID: "211ff6a9-0360-4606-92ca-cd4904494ff6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.346944 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.346987 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211ff6a9-0360-4606-92ca-cd4904494ff6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.347002 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz4dw\" (UniqueName: \"kubernetes.io/projected/211ff6a9-0360-4606-92ca-cd4904494ff6-kube-api-access-dz4dw\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.858659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-spcxr" event={"ID":"211ff6a9-0360-4606-92ca-cd4904494ff6","Type":"ContainerDied","Data":"e032efda30137eb490834e6dc85e9de283414db4a3a67a9c09d38739a3eb83b1"} Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.858967 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e032efda30137eb490834e6dc85e9de283414db4a3a67a9c09d38739a3eb83b1" Feb 21 08:11:15 crc kubenswrapper[4820]: I0221 08:11:15.858702 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-spcxr" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.084001 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:11:16 crc kubenswrapper[4820]: E0221 08:11:16.084453 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerName="keystone-db-sync" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.084477 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerName="keystone-db-sync" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.084698 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" containerName="keystone-db-sync" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.085821 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.122428 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.158681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.158764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc 
kubenswrapper[4820]: I0221 08:11:16.158948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.158997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.159095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.189660 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.190704 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.193296 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.204675 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.204933 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.205087 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.205260 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.265315 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266513 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266588 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266643 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266683 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.266809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.267760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.268377 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.269032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.287600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.345337 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"dnsmasq-dns-6776586657-khcd6\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " 
pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371537 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.371617 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.376227 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.376524 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.385094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.385169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.385500 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.401410 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.402719 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"keystone-bootstrap-lqg8w\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.550138 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:16 crc kubenswrapper[4820]: I0221 08:11:16.871660 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.022634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:17 crc kubenswrapper[4820]: W0221 08:11:17.026707 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb765426_53ee_4c41_a313_5ddf7591b6a9.slice/crio-760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4 WatchSource:0}: Error finding container 760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4: Status 404 returned error can't find the container with id 760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4 Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.887187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerStarted","Data":"1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.887560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" 
event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerStarted","Data":"760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.890432 4820 generic.go:334] "Generic (PLEG): container finished" podID="805ecde9-528b-45f4-a438-42c7799bab7b" containerID="057a9fc88ae8a1df7b41fb4caaf76a1bf24268155aeabdcb9c5614189e8f2e4c" exitCode=0 Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.890474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerDied","Data":"057a9fc88ae8a1df7b41fb4caaf76a1bf24268155aeabdcb9c5614189e8f2e4c"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.890499 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerStarted","Data":"db76e306f3b16314f5537d5d9c142291f2db91510eba8dc788421390eb44ddd6"} Feb 21 08:11:17 crc kubenswrapper[4820]: I0221 08:11:17.906484 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lqg8w" podStartSLOduration=1.9064668299999998 podStartE2EDuration="1.90646683s" podCreationTimestamp="2026-02-21 08:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:17.905624798 +0000 UTC m=+5052.938708996" watchObservedRunningTime="2026-02-21 08:11:17.90646683 +0000 UTC m=+5052.939551028" Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.245571 4820 scope.go:117] "RemoveContainer" containerID="eb27aecc6ecdd33121cbb1ef730b34658946fa8c269080b0841bca37cd76c02f" Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.900626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" 
event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerStarted","Data":"8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983"} Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.900698 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:18 crc kubenswrapper[4820]: I0221 08:11:18.926640 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6776586657-khcd6" podStartSLOduration=2.926623302 podStartE2EDuration="2.926623302s" podCreationTimestamp="2026-02-21 08:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:18.923083596 +0000 UTC m=+5053.956167794" watchObservedRunningTime="2026-02-21 08:11:18.926623302 +0000 UTC m=+5053.959707500" Feb 21 08:11:21 crc kubenswrapper[4820]: I0221 08:11:21.937269 4820 generic.go:334] "Generic (PLEG): container finished" podID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerID="1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557" exitCode=0 Feb 21 08:11:21 crc kubenswrapper[4820]: I0221 08:11:21.937350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerDied","Data":"1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557"} Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.286012 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386435 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386576 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.386595 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") pod \"db765426-53ee-4c41-a313-5ddf7591b6a9\" (UID: \"db765426-53ee-4c41-a313-5ddf7591b6a9\") " Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.391735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.392037 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts" (OuterVolumeSpecName: "scripts") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.392197 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr" (OuterVolumeSpecName: "kube-api-access-pkgjr") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "kube-api-access-pkgjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.393516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.410655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.411194 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data" (OuterVolumeSpecName: "config-data") pod "db765426-53ee-4c41-a313-5ddf7591b6a9" (UID: "db765426-53ee-4c41-a313-5ddf7591b6a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488526 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgjr\" (UniqueName: \"kubernetes.io/projected/db765426-53ee-4c41-a313-5ddf7591b6a9-kube-api-access-pkgjr\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488574 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488586 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488595 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-credential-keys\") on node \"crc\" 
DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488604 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.488613 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db765426-53ee-4c41-a313-5ddf7591b6a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.956331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqg8w" event={"ID":"db765426-53ee-4c41-a313-5ddf7591b6a9","Type":"ContainerDied","Data":"760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4"} Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.956375 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760770bb265c6623de57547a6108c0923b44cb58142576b97d32953c12424cb4" Feb 21 08:11:23 crc kubenswrapper[4820]: I0221 08:11:23.956403 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqg8w" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.025898 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.031739 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lqg8w"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.124460 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:11:24 crc kubenswrapper[4820]: E0221 08:11:24.125165 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerName="keystone-bootstrap" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.125182 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerName="keystone-bootstrap" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.125437 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" containerName="keystone-bootstrap" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.126151 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.128732 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129016 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129097 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129086 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.129768 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.132505 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.205553 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.205730 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.205954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.206193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.206279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.206352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308706 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308823 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.308880 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.313302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"keystone-bootstrap-cf89p\" (UID: 
\"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.328092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.328690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.329749 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.332676 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.345390 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"keystone-bootstrap-cf89p\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 
08:11:24.448898 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.899800 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:11:24 crc kubenswrapper[4820]: I0221 08:11:24.963705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerStarted","Data":"d7fb6b6b44fcf02e8336483e409e263a611f82667614ed7cd8f3db4ee1a7b24e"} Feb 21 08:11:25 crc kubenswrapper[4820]: I0221 08:11:25.705818 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db765426-53ee-4c41-a313-5ddf7591b6a9" path="/var/lib/kubelet/pods/db765426-53ee-4c41-a313-5ddf7591b6a9/volumes" Feb 21 08:11:25 crc kubenswrapper[4820]: I0221 08:11:25.971273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerStarted","Data":"08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d"} Feb 21 08:11:25 crc kubenswrapper[4820]: I0221 08:11:25.991402 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cf89p" podStartSLOduration=1.991381971 podStartE2EDuration="1.991381971s" podCreationTimestamp="2026-02-21 08:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:25.988924265 +0000 UTC m=+5061.022008473" watchObservedRunningTime="2026-02-21 08:11:25.991381971 +0000 UTC m=+5061.024466169" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.403454 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.478336 4820 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.478591 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59c565c565-4g68w" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" containerID="cri-o://af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" gracePeriod=10 Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.962780 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981700 4820 generic.go:334] "Generic (PLEG): container finished" podID="623dbf87-d39f-4026-9aa5-72d52508407b" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" exitCode=0 Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerDied","Data":"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156"} Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981793 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c565c565-4g68w" Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c565c565-4g68w" event={"ID":"623dbf87-d39f-4026-9aa5-72d52508407b","Type":"ContainerDied","Data":"f79446424404530462274952dece2308d0d1ba04fb18b87302d89305eb07556f"} Feb 21 08:11:26 crc kubenswrapper[4820]: I0221 08:11:26.981838 4820 scope.go:117] "RemoveContainer" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.012453 4820 scope.go:117] "RemoveContainer" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066900 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.066945 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-296sq\" (UniqueName: 
\"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.067022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") pod \"623dbf87-d39f-4026-9aa5-72d52508407b\" (UID: \"623dbf87-d39f-4026-9aa5-72d52508407b\") " Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.067587 4820 scope.go:117] "RemoveContainer" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" Feb 21 08:11:27 crc kubenswrapper[4820]: E0221 08:11:27.068372 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156\": container with ID starting with af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156 not found: ID does not exist" containerID="af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.068574 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156"} err="failed to get container status \"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156\": rpc error: code = NotFound desc = could not find container \"af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156\": container with ID starting with af8052f412c943374120d8b51d8c3d4d24889e70a244a5e96954506167971156 not found: ID does not exist" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.068606 4820 scope.go:117] "RemoveContainer" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" Feb 21 08:11:27 crc kubenswrapper[4820]: E0221 
08:11:27.069212 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6\": container with ID starting with f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6 not found: ID does not exist" containerID="f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.069308 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6"} err="failed to get container status \"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6\": rpc error: code = NotFound desc = could not find container \"f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6\": container with ID starting with f626901541173379455a81e8ff44d03b7873e1bc1726ecd876280a14744221e6 not found: ID does not exist" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.084722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq" (OuterVolumeSpecName: "kube-api-access-296sq") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "kube-api-access-296sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.124905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.130769 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config" (OuterVolumeSpecName: "config") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.137810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.148496 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "623dbf87-d39f-4026-9aa5-72d52508407b" (UID: "623dbf87-d39f-4026-9aa5-72d52508407b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169136 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169170 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169181 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169397 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-296sq\" (UniqueName: \"kubernetes.io/projected/623dbf87-d39f-4026-9aa5-72d52508407b-kube-api-access-296sq\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.169548 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/623dbf87-d39f-4026-9aa5-72d52508407b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.314188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.326718 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c565c565-4g68w"] Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.706824 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" path="/var/lib/kubelet/pods/623dbf87-d39f-4026-9aa5-72d52508407b/volumes" Feb 21 08:11:27 crc kubenswrapper[4820]: 
I0221 08:11:27.992615 4820 generic.go:334] "Generic (PLEG): container finished" podID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerID="08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d" exitCode=0 Feb 21 08:11:27 crc kubenswrapper[4820]: I0221 08:11:27.992658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerDied","Data":"08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d"} Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.300713 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407750 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407846 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407876 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407900 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.407926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.408030 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") pod \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\" (UID: \"85662cfe-6ca0-41d0-8858-4e63cd77f3c6\") " Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429390 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429432 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts" (OuterVolumeSpecName: "scripts") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b" (OuterVolumeSpecName: "kube-api-access-tfz8b") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "kube-api-access-tfz8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.429664 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.438601 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data" (OuterVolumeSpecName: "config-data") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.440092 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85662cfe-6ca0-41d0-8858-4e63cd77f3c6" (UID: "85662cfe-6ca0-41d0-8858-4e63cd77f3c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510052 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510107 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfz8b\" (UniqueName: \"kubernetes.io/projected/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-kube-api-access-tfz8b\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510126 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510140 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510153 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:29 crc kubenswrapper[4820]: I0221 08:11:29.510164 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85662cfe-6ca0-41d0-8858-4e63cd77f3c6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.012826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cf89p" event={"ID":"85662cfe-6ca0-41d0-8858-4e63cd77f3c6","Type":"ContainerDied","Data":"d7fb6b6b44fcf02e8336483e409e263a611f82667614ed7cd8f3db4ee1a7b24e"} Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 
08:11:30.012879 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7fb6b6b44fcf02e8336483e409e263a611f82667614ed7cd8f3db4ee1a7b24e" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.012881 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cf89p" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.090803 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fcdf4b996-mcbdr"] Feb 21 08:11:30 crc kubenswrapper[4820]: E0221 08:11:30.091129 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerName="keystone-bootstrap" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerName="keystone-bootstrap" Feb 21 08:11:30 crc kubenswrapper[4820]: E0221 08:11:30.091173 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="init" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091183 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="init" Feb 21 08:11:30 crc kubenswrapper[4820]: E0221 08:11:30.091200 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091206 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091369 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" containerName="keystone-bootstrap" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091380 4820 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="623dbf87-d39f-4026-9aa5-72d52508407b" containerName="dnsmasq-dns" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.091884 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.093736 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.094189 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.095848 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.095904 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-48tx9" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.096036 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.096525 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.111048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fcdf4b996-mcbdr"] Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.223886 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-fernet-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-config-data\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-public-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224105 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwr9\" (UniqueName: \"kubernetes.io/projected/1f763cab-817e-415e-bb73-4e077fa0c745-kube-api-access-mgwr9\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224272 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-credential-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-internal-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-combined-ca-bundle\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.224377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-scripts\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-fernet-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-config-data\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-public-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326230 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwr9\" (UniqueName: 
\"kubernetes.io/projected/1f763cab-817e-415e-bb73-4e077fa0c745-kube-api-access-mgwr9\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326361 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-credential-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-internal-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-combined-ca-bundle\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.326435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-scripts\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.331486 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-scripts\") pod \"keystone-fcdf4b996-mcbdr\" (UID: 
\"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.331502 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-fernet-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.331862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-config-data\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.332050 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-credential-keys\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.332137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-public-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.332504 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-combined-ca-bundle\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.339026 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f763cab-817e-415e-bb73-4e077fa0c745-internal-tls-certs\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.354947 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwr9\" (UniqueName: \"kubernetes.io/projected/1f763cab-817e-415e-bb73-4e077fa0c745-kube-api-access-mgwr9\") pod \"keystone-fcdf4b996-mcbdr\" (UID: \"1f763cab-817e-415e-bb73-4e077fa0c745\") " pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.417119 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:30 crc kubenswrapper[4820]: I0221 08:11:30.901926 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fcdf4b996-mcbdr"] Feb 21 08:11:31 crc kubenswrapper[4820]: I0221 08:11:31.021851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcdf4b996-mcbdr" event={"ID":"1f763cab-817e-415e-bb73-4e077fa0c745","Type":"ContainerStarted","Data":"67606be793907b9f7fd5d9fa4cc8d7b8d471061d111dfd0bc0b44463c61f875a"} Feb 21 08:11:32 crc kubenswrapper[4820]: I0221 08:11:32.031258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcdf4b996-mcbdr" event={"ID":"1f763cab-817e-415e-bb73-4e077fa0c745","Type":"ContainerStarted","Data":"99ea8ebf293e30a49f96437393013670d4715846d581ac21a58c00b4b9225020"} Feb 21 08:11:32 crc kubenswrapper[4820]: I0221 08:11:32.031574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:11:32 crc kubenswrapper[4820]: I0221 08:11:32.054209 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-fcdf4b996-mcbdr" podStartSLOduration=2.054187972 podStartE2EDuration="2.054187972s" podCreationTimestamp="2026-02-21 08:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:11:32.047559602 +0000 UTC m=+5067.080643800" watchObservedRunningTime="2026-02-21 08:11:32.054187972 +0000 UTC m=+5067.087272170" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.816595 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817097 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817137 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817818 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:11:43 crc kubenswrapper[4820]: I0221 08:11:43.817875 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897" gracePeriod=600 Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.135841 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897" exitCode=0 Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.135909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897"} Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.136442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"} Feb 21 08:11:45 crc kubenswrapper[4820]: I0221 08:11:45.136469 4820 scope.go:117] "RemoveContainer" containerID="47976c4a5f4ff91b6105ec052d294890d70a8cd9f9f412e659394252711dab83" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.007613 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-fcdf4b996-mcbdr" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.694674 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.695820 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703045 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703632 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-b64vn" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.703864 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.738335 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: E0221 08:12:02.738926 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-hgbmb openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-hgbmb openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="7aaf850e-4879-4971-aff1-b9e669395079" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.745399 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.817835 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.819176 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.831599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.961826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.961872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.961896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:02 crc kubenswrapper[4820]: I0221 08:12:02.962075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.063701 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.064052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.064137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.064274 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.065560 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.069862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.074086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.084915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") pod \"openstackclient\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.143292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.298360 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.305274 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7aaf850e-4879-4971-aff1-b9e669395079" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.311514 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.557904 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.567763 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:12:03 crc kubenswrapper[4820]: I0221 08:12:03.706960 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aaf850e-4879-4971-aff1-b9e669395079" path="/var/lib/kubelet/pods/7aaf850e-4879-4971-aff1-b9e669395079/volumes" Feb 21 08:12:04 crc kubenswrapper[4820]: I0221 08:12:04.307751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0690f7f6-8a8e-4c10-92b5-31640a2a46b1","Type":"ContainerStarted","Data":"018cc0eb075c25a159f0d4f6d7b7d2a1a4f2eb823a973e7c639603f562270ccf"} Feb 21 08:12:04 crc kubenswrapper[4820]: I0221 08:12:04.307769 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:12:04 crc kubenswrapper[4820]: I0221 08:12:04.315603 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7aaf850e-4879-4971-aff1-b9e669395079" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" Feb 21 08:12:15 crc kubenswrapper[4820]: I0221 08:12:15.390972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0690f7f6-8a8e-4c10-92b5-31640a2a46b1","Type":"ContainerStarted","Data":"d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469"} Feb 21 08:12:15 crc kubenswrapper[4820]: I0221 08:12:15.412697 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.228803462 podStartE2EDuration="13.412677521s" podCreationTimestamp="2026-02-21 08:12:02 +0000 UTC" firstStartedPulling="2026-02-21 08:12:03.567490319 +0000 UTC m=+5098.600574517" lastFinishedPulling="2026-02-21 08:12:14.751364358 +0000 UTC m=+5109.784448576" observedRunningTime="2026-02-21 08:12:15.40558937 +0000 UTC m=+5110.438673558" watchObservedRunningTime="2026-02-21 08:12:15.412677521 +0000 UTC m=+5110.445761719" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.972467 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.973949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.976000 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.978363 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.979415 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.988028 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:13:41 crc kubenswrapper[4820]: I0221 08:13:41.998467 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksws5\" (UniqueName: 
\"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.040747 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142321 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.142527 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.143169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.143812 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.162674 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"barbican-db-create-w88hx\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.163308 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"barbican-5a31-account-create-update-p74qt\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") " pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.306962 4820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.313335 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.736922 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w88hx"] Feb 21 08:13:42 crc kubenswrapper[4820]: I0221 08:13:42.799653 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"] Feb 21 08:13:42 crc kubenswrapper[4820]: W0221 08:13:42.807905 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4a61ba7_b697_4b33_8ed3_9dda50a2c415.slice/crio-cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559 WatchSource:0}: Error finding container cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559: Status 404 returned error can't find the container with id cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559 Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.054879 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerStarted","Data":"135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.055497 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerStarted","Data":"cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.058516 4820 generic.go:334] "Generic (PLEG): container finished" podID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" 
containerID="501babb59c40b46545eba4aa654f940bb7c87c7e466ae9ff90824f2b1d71dea7" exitCode=0 Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.058575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w88hx" event={"ID":"0fea2a27-a57a-4827-8e17-5d19ef7bba28","Type":"ContainerDied","Data":"501babb59c40b46545eba4aa654f940bb7c87c7e466ae9ff90824f2b1d71dea7"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.058611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w88hx" event={"ID":"0fea2a27-a57a-4827-8e17-5d19ef7bba28","Type":"ContainerStarted","Data":"58604e5039d452a5e4dca0e77afd3357e776cfdfc271080e2bd76a283335c794"} Feb 21 08:13:43 crc kubenswrapper[4820]: I0221 08:13:43.069892 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5a31-account-create-update-p74qt" podStartSLOduration=2.06986628 podStartE2EDuration="2.06986628s" podCreationTimestamp="2026-02-21 08:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:13:43.066694963 +0000 UTC m=+5198.099779161" watchObservedRunningTime="2026-02-21 08:13:43.06986628 +0000 UTC m=+5198.102950478" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.066626 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerID="135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4" exitCode=0 Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.066686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerDied","Data":"135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4"} Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.410606 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.582886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") pod \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.583040 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") pod \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\" (UID: \"0fea2a27-a57a-4827-8e17-5d19ef7bba28\") " Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.583600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fea2a27-a57a-4827-8e17-5d19ef7bba28" (UID: "0fea2a27-a57a-4827-8e17-5d19ef7bba28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.583735 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fea2a27-a57a-4827-8e17-5d19ef7bba28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.588310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4" (OuterVolumeSpecName: "kube-api-access-qw2c4") pod "0fea2a27-a57a-4827-8e17-5d19ef7bba28" (UID: "0fea2a27-a57a-4827-8e17-5d19ef7bba28"). InnerVolumeSpecName "kube-api-access-qw2c4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:13:44 crc kubenswrapper[4820]: I0221 08:13:44.685159 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2c4\" (UniqueName: \"kubernetes.io/projected/0fea2a27-a57a-4827-8e17-5d19ef7bba28-kube-api-access-qw2c4\") on node \"crc\" DevicePath \"\"" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.076142 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w88hx" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.076146 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w88hx" event={"ID":"0fea2a27-a57a-4827-8e17-5d19ef7bba28","Type":"ContainerDied","Data":"58604e5039d452a5e4dca0e77afd3357e776cfdfc271080e2bd76a283335c794"} Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.076578 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58604e5039d452a5e4dca0e77afd3357e776cfdfc271080e2bd76a283335c794" Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.380035 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt"
Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.496584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") pod \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") "
Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.496837 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") pod \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\" (UID: \"d4a61ba7-b697-4b33-8ed3-9dda50a2c415\") "
Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.497136 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4a61ba7-b697-4b33-8ed3-9dda50a2c415" (UID: "d4a61ba7-b697-4b33-8ed3-9dda50a2c415"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.497863 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.500811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5" (OuterVolumeSpecName: "kube-api-access-ksws5") pod "d4a61ba7-b697-4b33-8ed3-9dda50a2c415" (UID: "d4a61ba7-b697-4b33-8ed3-9dda50a2c415"). InnerVolumeSpecName "kube-api-access-ksws5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:13:45 crc kubenswrapper[4820]: I0221 08:13:45.599223 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksws5\" (UniqueName: \"kubernetes.io/projected/d4a61ba7-b697-4b33-8ed3-9dda50a2c415-kube-api-access-ksws5\") on node \"crc\" DevicePath \"\""
Feb 21 08:13:46 crc kubenswrapper[4820]: I0221 08:13:46.083637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a31-account-create-update-p74qt" event={"ID":"d4a61ba7-b697-4b33-8ed3-9dda50a2c415","Type":"ContainerDied","Data":"cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559"}
Feb 21 08:13:46 crc kubenswrapper[4820]: I0221 08:13:46.083690 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd70a967ebca8c1b6239d552afa7e33f8a6e3b4e70a41da7f691365abda34559"
Feb 21 08:13:46 crc kubenswrapper[4820]: I0221 08:13:46.084342 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a31-account-create-update-p74qt"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.418066 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kncz4"]
Feb 21 08:13:47 crc kubenswrapper[4820]: E0221 08:13:47.419592 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" containerName="mariadb-database-create"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.419717 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" containerName="mariadb-database-create"
Feb 21 08:13:47 crc kubenswrapper[4820]: E0221 08:13:47.419797 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerName="mariadb-account-create-update"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.419865 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerName="mariadb-account-create-update"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.420096 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" containerName="mariadb-account-create-update"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.420196 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" containerName="mariadb-database-create"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.420879 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.424249 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.424383 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9x2ph"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.433668 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kncz4"]
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.441294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.441416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.441463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.542640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.542737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.542776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.549137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.553729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.565052 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"barbican-db-sync-kncz4\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") " pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:47 crc kubenswrapper[4820]: I0221 08:13:47.748313 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:48 crc kubenswrapper[4820]: I0221 08:13:48.200093 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kncz4"]
Feb 21 08:13:48 crc kubenswrapper[4820]: W0221 08:13:48.205639 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2285cbc5_545d_463d_ae4a_350c3fd26323.slice/crio-5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011 WatchSource:0}: Error finding container 5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011: Status 404 returned error can't find the container with id 5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011
Feb 21 08:13:49 crc kubenswrapper[4820]: I0221 08:13:49.109464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerStarted","Data":"5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011"}
Feb 21 08:13:53 crc kubenswrapper[4820]: I0221 08:13:53.146425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerStarted","Data":"2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5"}
Feb 21 08:13:53 crc kubenswrapper[4820]: I0221 08:13:53.168941 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kncz4" podStartSLOduration=1.779823202 podStartE2EDuration="6.168920667s" podCreationTimestamp="2026-02-21 08:13:47 +0000 UTC" firstStartedPulling="2026-02-21 08:13:48.208675379 +0000 UTC m=+5203.241759577" lastFinishedPulling="2026-02-21 08:13:52.597772844 +0000 UTC m=+5207.630857042" observedRunningTime="2026-02-21 08:13:53.166219285 +0000 UTC m=+5208.199303503" watchObservedRunningTime="2026-02-21 08:13:53.168920667 +0000 UTC m=+5208.202004865"
Feb 21 08:13:54 crc kubenswrapper[4820]: I0221 08:13:54.155432 4820 generic.go:334] "Generic (PLEG): container finished" podID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerID="2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5" exitCode=0
Feb 21 08:13:54 crc kubenswrapper[4820]: I0221 08:13:54.155467 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerDied","Data":"2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5"}
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.479747 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.603745 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") pod \"2285cbc5-545d-463d-ae4a-350c3fd26323\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") "
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.603865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") pod \"2285cbc5-545d-463d-ae4a-350c3fd26323\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") "
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.606872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") pod \"2285cbc5-545d-463d-ae4a-350c3fd26323\" (UID: \"2285cbc5-545d-463d-ae4a-350c3fd26323\") "
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.611132 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t" (OuterVolumeSpecName: "kube-api-access-xdx4t") pod "2285cbc5-545d-463d-ae4a-350c3fd26323" (UID: "2285cbc5-545d-463d-ae4a-350c3fd26323"). InnerVolumeSpecName "kube-api-access-xdx4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.611656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2285cbc5-545d-463d-ae4a-350c3fd26323" (UID: "2285cbc5-545d-463d-ae4a-350c3fd26323"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.627343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2285cbc5-545d-463d-ae4a-350c3fd26323" (UID: "2285cbc5-545d-463d-ae4a-350c3fd26323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.709332 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdx4t\" (UniqueName: \"kubernetes.io/projected/2285cbc5-545d-463d-ae4a-350c3fd26323-kube-api-access-xdx4t\") on node \"crc\" DevicePath \"\""
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.709374 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:13:55 crc kubenswrapper[4820]: I0221 08:13:55.709387 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2285cbc5-545d-463d-ae4a-350c3fd26323-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.173205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kncz4" event={"ID":"2285cbc5-545d-463d-ae4a-350c3fd26323","Type":"ContainerDied","Data":"5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011"}
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.173364 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ddbc6f0bbb52f94fb64169984b46682325c6a7f7aeb5e2edeaee1a33b82b011"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.173426 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kncz4"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.390959 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-769cf6fd65-dfls2"]
Feb 21 08:13:56 crc kubenswrapper[4820]: E0221 08:13:56.391393 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerName="barbican-db-sync"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.391411 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerName="barbican-db-sync"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.391584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" containerName="barbican-db-sync"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.394406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.401999 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.403608 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.403911 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9x2ph"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421380 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-754674bd8d-6lxjs"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlppc\" (UniqueName: \"kubernetes.io/projected/c1f442bc-072b-483e-8821-3ee262e5aa4e-kube-api-access-zlppc\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421767 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-combined-ca-bundle\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421800 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data-custom\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f442bc-072b-483e-8821-3ee262e5aa4e-logs\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.421854 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.423013 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.429218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.436612 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-769cf6fd65-dfls2"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.454397 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-754674bd8d-6lxjs"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.491152 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.492461 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.509812 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523082 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data-custom\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlppc\" (UniqueName: \"kubernetes.io/projected/c1f442bc-072b-483e-8821-3ee262e5aa4e-kube-api-access-zlppc\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-combined-ca-bundle\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523471 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data-custom\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f442bc-072b-483e-8821-3ee262e5aa4e-logs\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523553 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4skw\" (UniqueName: \"kubernetes.io/projected/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-kube-api-access-r4skw\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523619 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-logs\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.523668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-combined-ca-bundle\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.526572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1f442bc-072b-483e-8821-3ee262e5aa4e-logs\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.527298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data-custom\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.528070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-combined-ca-bundle\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.531114 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f442bc-072b-483e-8821-3ee262e5aa4e-config-data\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.544621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlppc\" (UniqueName: \"kubernetes.io/projected/c1f442bc-072b-483e-8821-3ee262e5aa4e-kube-api-access-zlppc\") pod \"barbican-worker-769cf6fd65-dfls2\" (UID: \"c1f442bc-072b-483e-8821-3ee262e5aa4e\") " pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.586610 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7686494894-42qqd"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.601256 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.606400 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"]
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.607943 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-logs\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-combined-ca-bundle\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627673 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data-custom\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627703 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.627785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4skw\" (UniqueName: \"kubernetes.io/projected/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-kube-api-access-r4skw\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628252 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-logs\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628756 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.628920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.629467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.632285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data-custom\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.634623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-combined-ca-bundle\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.646096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4skw\" (UniqueName: \"kubernetes.io/projected/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-kube-api-access-r4skw\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.649265 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4-config-data\") pod \"barbican-keystone-listener-754674bd8d-6lxjs\" (UID: \"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4\") " pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.649381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"dnsmasq-dns-56f68c4f9-lzs2s\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.718859 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-769cf6fd65-dfls2"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729516 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729622 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729694 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd"
Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.729752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.730179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.734365 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.735435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.739548 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.746195 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.752767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"barbican-api-7686494894-42qqd\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.812835 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:56 crc kubenswrapper[4820]: I0221 08:13:56.916691 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.238966 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-769cf6fd65-dfls2"] Feb 21 08:13:57 crc kubenswrapper[4820]: W0221 08:13:57.241850 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1f442bc_072b_483e_8821_3ee262e5aa4e.slice/crio-362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5 WatchSource:0}: Error finding container 362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5: Status 404 returned error can't find the container with id 362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5 Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.320496 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-754674bd8d-6lxjs"] Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.413951 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:13:57 crc kubenswrapper[4820]: I0221 08:13:57.501100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.192434 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" event={"ID":"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4","Type":"ContainerStarted","Data":"ee3d488b58002a926a798c1f2416707be14f762c34d6d00a88c3289b17cd8b50"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.195195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769cf6fd65-dfls2" event={"ID":"c1f442bc-072b-483e-8821-3ee262e5aa4e","Type":"ContainerStarted","Data":"362a51eb612a6a1acca6f31c98500cd7772f9ff5d9e13abfd167c35779427ce5"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.196821 
4820 generic.go:334] "Generic (PLEG): container finished" podID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" exitCode=0 Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.196881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerDied","Data":"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.196902 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerStarted","Data":"e4f69cac7c8ea8139b87e81abdba2e547b7e3f99598e2c84fa49315dcdd98eeb"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerStarted","Data":"bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerStarted","Data":"8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerStarted","Data":"6ecb19021e2d9bc235c4223e0bff1c84d022aa0f268cb6537621cb5e3479a838"} Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.199695 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 
08:13:58.248115 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7686494894-42qqd" podStartSLOduration=2.2480966430000002 podStartE2EDuration="2.248096643s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:13:58.24096744 +0000 UTC m=+5213.274051638" watchObservedRunningTime="2026-02-21 08:13:58.248096643 +0000 UTC m=+5213.281180841" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.691586 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cf69c945b-fsc4w"] Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.693354 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.695633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.696971 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.701546 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cf69c945b-fsc4w"] Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.766581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-logs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.766982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-internal-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8d7g\" (UniqueName: \"kubernetes.io/projected/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-kube-api-access-c8d7g\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data-custom\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-combined-ca-bundle\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.767551 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-public-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877676 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-logs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-internal-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8d7g\" (UniqueName: \"kubernetes.io/projected/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-kube-api-access-c8d7g\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877815 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data-custom\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877842 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-combined-ca-bundle\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.877961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-public-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.878168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-logs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.882154 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data-custom\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.882930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-public-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.884932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-combined-ca-bundle\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.886318 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-internal-tls-certs\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.886688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-config-data\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:58 crc kubenswrapper[4820]: I0221 08:13:58.894484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8d7g\" (UniqueName: \"kubernetes.io/projected/08d7d55d-2b0b-40fe-9b1c-5930358bebe8-kube-api-access-c8d7g\") pod \"barbican-api-5cf69c945b-fsc4w\" (UID: \"08d7d55d-2b0b-40fe-9b1c-5930358bebe8\") " pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.016045 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.212925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769cf6fd65-dfls2" event={"ID":"c1f442bc-072b-483e-8821-3ee262e5aa4e","Type":"ContainerStarted","Data":"598c72d84b9155d180086f27a39db53f86b07307551e8ddb993a05f723d49f9b"} Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.214442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerStarted","Data":"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877"} Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.214727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.219756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" event={"ID":"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4","Type":"ContainerStarted","Data":"5c2f5a88f4d426683efadcfb835fa8b78a87c229c33ee36cbecc458d1672c7ed"} Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.219821 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:13:59 crc kubenswrapper[4820]: I0221 08:13:59.233055 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" podStartSLOduration=3.233033312 podStartE2EDuration="3.233033312s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:13:59.231974914 +0000 UTC m=+5214.265059112" watchObservedRunningTime="2026-02-21 08:13:59.233033312 +0000 UTC m=+5214.266117520" Feb 21 08:13:59 crc kubenswrapper[4820]: 
I0221 08:13:59.506401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cf69c945b-fsc4w"] Feb 21 08:13:59 crc kubenswrapper[4820]: W0221 08:13:59.511449 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d7d55d_2b0b_40fe_9b1c_5930358bebe8.slice/crio-eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd WatchSource:0}: Error finding container eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd: Status 404 returned error can't find the container with id eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.229688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf69c945b-fsc4w" event={"ID":"08d7d55d-2b0b-40fe-9b1c-5930358bebe8","Type":"ContainerStarted","Data":"ff78b6c637a07d36357c56a079878ffbfcd326aced08254ac34aaf5ac3ab147b"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.229984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf69c945b-fsc4w" event={"ID":"08d7d55d-2b0b-40fe-9b1c-5930358bebe8","Type":"ContainerStarted","Data":"16d3b75b707e1e3c9c054954ee90b4f680a6ea059f9363a937a01d7323a65c8c"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.230004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf69c945b-fsc4w" event={"ID":"08d7d55d-2b0b-40fe-9b1c-5930358bebe8","Type":"ContainerStarted","Data":"eb7f0bf22917315494c3dd64c413e0dfcffcd24f540086024f42c6f5399701bd"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.230067 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.230091 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:00 crc 
kubenswrapper[4820]: I0221 08:14:00.232378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769cf6fd65-dfls2" event={"ID":"c1f442bc-072b-483e-8821-3ee262e5aa4e","Type":"ContainerStarted","Data":"cbc82c1d949b4157d44143d8c5e4e63b85f508822330886cace3ee128a860431"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.234375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" event={"ID":"d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4","Type":"ContainerStarted","Data":"e0f7a87ef776a8b4a6b4f14148f27d1ad9348bb917c7801dacde9d3c2da2572b"} Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.254474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cf69c945b-fsc4w" podStartSLOduration=2.254452529 podStartE2EDuration="2.254452529s" podCreationTimestamp="2026-02-21 08:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:14:00.24822155 +0000 UTC m=+5215.281305748" watchObservedRunningTime="2026-02-21 08:14:00.254452529 +0000 UTC m=+5215.287536727" Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.271457 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-754674bd8d-6lxjs" podStartSLOduration=2.7445653759999997 podStartE2EDuration="4.271444648s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="2026-02-21 08:13:57.32361419 +0000 UTC m=+5212.356698388" lastFinishedPulling="2026-02-21 08:13:58.850493462 +0000 UTC m=+5213.883577660" observedRunningTime="2026-02-21 08:14:00.268483748 +0000 UTC m=+5215.301567946" watchObservedRunningTime="2026-02-21 08:14:00.271444648 +0000 UTC m=+5215.304528846" Feb 21 08:14:00 crc kubenswrapper[4820]: I0221 08:14:00.290694 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-769cf6fd65-dfls2" podStartSLOduration=2.702040066 podStartE2EDuration="4.290670649s" podCreationTimestamp="2026-02-21 08:13:56 +0000 UTC" firstStartedPulling="2026-02-21 08:13:57.244341735 +0000 UTC m=+5212.277425933" lastFinishedPulling="2026-02-21 08:13:58.832972318 +0000 UTC m=+5213.866056516" observedRunningTime="2026-02-21 08:14:00.281401967 +0000 UTC m=+5215.314486175" watchObservedRunningTime="2026-02-21 08:14:00.290670649 +0000 UTC m=+5215.323754847" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.517793 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.534741 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cf69c945b-fsc4w" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.620804 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.621140 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" containerID="cri-o://8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d" gracePeriod=30 Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.621321 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" containerID="cri-o://bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472" gracePeriod=30 Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.631424 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" 
probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": EOF" Feb 21 08:14:05 crc kubenswrapper[4820]: I0221 08:14:05.631506 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": EOF" Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.285951 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerID="8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d" exitCode=143 Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.286022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerDied","Data":"8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d"} Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.815009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.903059 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:14:06 crc kubenswrapper[4820]: I0221 08:14:06.903364 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6776586657-khcd6" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns" containerID="cri-o://8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983" gracePeriod=10 Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.318521 4820 generic.go:334] "Generic (PLEG): container finished" podID="805ecde9-528b-45f4-a438-42c7799bab7b" containerID="8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983" exitCode=0 Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.318815 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerDied","Data":"8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983"} Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.431367 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521558 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521643 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.521785 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") pod \"805ecde9-528b-45f4-a438-42c7799bab7b\" (UID: \"805ecde9-528b-45f4-a438-42c7799bab7b\") " Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.526394 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k" (OuterVolumeSpecName: "kube-api-access-hpt7k") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "kube-api-access-hpt7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.563763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.566390 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config" (OuterVolumeSpecName: "config") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.576730 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.592740 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "805ecde9-528b-45f4-a438-42c7799bab7b" (UID: "805ecde9-528b-45f4-a438-42c7799bab7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623222 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623267 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623277 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpt7k\" (UniqueName: \"kubernetes.io/projected/805ecde9-528b-45f4-a438-42c7799bab7b-kube-api-access-hpt7k\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623289 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:07 crc kubenswrapper[4820]: I0221 08:14:07.623297 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805ecde9-528b-45f4-a438-42c7799bab7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.337710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6776586657-khcd6" 
event={"ID":"805ecde9-528b-45f4-a438-42c7799bab7b","Type":"ContainerDied","Data":"db76e306f3b16314f5537d5d9c142291f2db91510eba8dc788421390eb44ddd6"} Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.337777 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6776586657-khcd6" Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.338070 4820 scope.go:117] "RemoveContainer" containerID="8f215298561c4a58c13338e8e9d0bb05dbf28f207c7aca70826053c4615fb983" Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.364879 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.369727 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6776586657-khcd6"] Feb 21 08:14:08 crc kubenswrapper[4820]: I0221 08:14:08.379375 4820 scope.go:117] "RemoveContainer" containerID="057a9fc88ae8a1df7b41fb4caaf76a1bf24268155aeabdcb9c5614189e8f2e4c" Feb 21 08:14:09 crc kubenswrapper[4820]: I0221 08:14:09.706576 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" path="/var/lib/kubelet/pods/805ecde9-528b-45f4-a438-42c7799bab7b/volumes" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.023273 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": read tcp 10.217.0.2:34896->10.217.1.34:9311: read: connection reset by peer" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.023375 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7686494894-42qqd" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.34:9311/healthcheck\": read tcp 
10.217.0.2:34908->10.217.1.34:9311: read: connection reset by peer" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.378811 4820 generic.go:334] "Generic (PLEG): container finished" podID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerID="bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472" exitCode=0 Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.378869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerDied","Data":"bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472"} Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.379171 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7686494894-42qqd" event={"ID":"ca3b906b-fd95-4d1f-a82f-18663d7cb683","Type":"ContainerDied","Data":"6ecb19021e2d9bc235c4223e0bff1c84d022aa0f268cb6537621cb5e3479a838"} Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.379186 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ecb19021e2d9bc235c4223e0bff1c84d022aa0f268cb6537621cb5e3479a838" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.389468 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487161 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487309 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.487445 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") pod \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\" (UID: \"ca3b906b-fd95-4d1f-a82f-18663d7cb683\") " Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.488252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs" (OuterVolumeSpecName: "logs") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.492448 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69" (OuterVolumeSpecName: "kube-api-access-4sp69") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "kube-api-access-4sp69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.492639 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.510336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.535538 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data" (OuterVolumeSpecName: "config-data") pod "ca3b906b-fd95-4d1f-a82f-18663d7cb683" (UID: "ca3b906b-fd95-4d1f-a82f-18663d7cb683"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588900 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca3b906b-fd95-4d1f-a82f-18663d7cb683-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588938 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588951 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588966 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca3b906b-fd95-4d1f-a82f-18663d7cb683-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:11 crc kubenswrapper[4820]: I0221 08:14:11.588977 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sp69\" (UniqueName: \"kubernetes.io/projected/ca3b906b-fd95-4d1f-a82f-18663d7cb683-kube-api-access-4sp69\") on node \"crc\" DevicePath \"\"" Feb 21 08:14:12 crc kubenswrapper[4820]: I0221 08:14:12.386350 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7686494894-42qqd" Feb 21 08:14:12 crc kubenswrapper[4820]: I0221 08:14:12.413288 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:14:12 crc kubenswrapper[4820]: I0221 08:14:12.420088 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7686494894-42qqd"] Feb 21 08:14:13 crc kubenswrapper[4820]: I0221 08:14:13.705943 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" path="/var/lib/kubelet/pods/ca3b906b-fd95-4d1f-a82f-18663d7cb683/volumes" Feb 21 08:14:13 crc kubenswrapper[4820]: I0221 08:14:13.816148 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:14:13 crc kubenswrapper[4820]: I0221 08:14:13.816218 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:14:43 crc kubenswrapper[4820]: I0221 08:14:43.816914 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:14:43 crc kubenswrapper[4820]: I0221 08:14:43.817508 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.025943 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.026953 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.026977 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.027001 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027007 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.027027 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="init" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027034 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="init" Feb 21 08:14:45 crc kubenswrapper[4820]: E0221 08:14:45.027055 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027061 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027458 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api-log" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027492 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="805ecde9-528b-45f4-a438-42c7799bab7b" containerName="dnsmasq-dns" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.027503 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b906b-fd95-4d1f-a82f-18663d7cb683" containerName="barbican-api" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.028659 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.037363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.108501 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.109565 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.116504 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.121178 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.155015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.155089 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.256834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.256894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: 
\"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.256946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.257065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.259307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.275042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"neutron-db-create-jzbnq\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.351645 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.358379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.358458 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.359168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.376839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"neutron-3bef-account-create-update-7n4bl\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.438394 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.778182 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:14:45 crc kubenswrapper[4820]: I0221 08:14:45.911248 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:14:45 crc kubenswrapper[4820]: W0221 08:14:45.919743 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80901dca_016d_4c52_b87d_f953b0689f1a.slice/crio-d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652 WatchSource:0}: Error finding container d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652: Status 404 returned error can't find the container with id d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652 Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.638902 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerID="2d67b7bb0de25794d2af04a8fdecff08fd5cb66963010072ec396cf1f0a89430" exitCode=0 Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.638974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzbnq" event={"ID":"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22","Type":"ContainerDied","Data":"2d67b7bb0de25794d2af04a8fdecff08fd5cb66963010072ec396cf1f0a89430"} Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.639010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzbnq" event={"ID":"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22","Type":"ContainerStarted","Data":"58f2984dd91efe48a6f8437de3afb08db1f631136177db0245a991aa6d5c950a"} Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.640218 4820 generic.go:334] "Generic (PLEG): container finished" podID="80901dca-016d-4c52-b87d-f953b0689f1a" 
containerID="e88ec1f0511faea63b1b890af60d3ecbf225488e293807f27ac476bd20e4d2af" exitCode=0 Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.640268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bef-account-create-update-7n4bl" event={"ID":"80901dca-016d-4c52-b87d-f953b0689f1a","Type":"ContainerDied","Data":"e88ec1f0511faea63b1b890af60d3ecbf225488e293807f27ac476bd20e4d2af"} Feb 21 08:14:46 crc kubenswrapper[4820]: I0221 08:14:46.640308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bef-account-create-update-7n4bl" event={"ID":"80901dca-016d-4c52-b87d-f953b0689f1a","Type":"ContainerStarted","Data":"d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652"} Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.015982 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq" Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.023098 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl" Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.107840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") pod \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") pod \"80901dca-016d-4c52-b87d-f953b0689f1a\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108150 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") pod \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\" (UID: \"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22\") " Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108165 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") pod \"80901dca-016d-4c52-b87d-f953b0689f1a\" (UID: \"80901dca-016d-4c52-b87d-f953b0689f1a\") " Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108779 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80901dca-016d-4c52-b87d-f953b0689f1a" (UID: "80901dca-016d-4c52-b87d-f953b0689f1a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.108821 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" (UID: "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.109095 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.109115 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80901dca-016d-4c52-b87d-f953b0689f1a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.117459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b" (OuterVolumeSpecName: "kube-api-access-wh46b") pod "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" (UID: "4e0e7c5f-32ab-470c-a8eb-b0067af1ce22"). InnerVolumeSpecName "kube-api-access-wh46b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.117545 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr" (OuterVolumeSpecName: "kube-api-access-7csdr") pod "80901dca-016d-4c52-b87d-f953b0689f1a" (UID: "80901dca-016d-4c52-b87d-f953b0689f1a"). InnerVolumeSpecName "kube-api-access-7csdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.211009 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh46b\" (UniqueName: \"kubernetes.io/projected/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22-kube-api-access-wh46b\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.211036 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7csdr\" (UniqueName: \"kubernetes.io/projected/80901dca-016d-4c52-b87d-f953b0689f1a-kube-api-access-7csdr\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.656327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzbnq" event={"ID":"4e0e7c5f-32ab-470c-a8eb-b0067af1ce22","Type":"ContainerDied","Data":"58f2984dd91efe48a6f8437de3afb08db1f631136177db0245a991aa6d5c950a"}
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.656605 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58f2984dd91efe48a6f8437de3afb08db1f631136177db0245a991aa6d5c950a"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.656410 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzbnq"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.657781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bef-account-create-update-7n4bl" event={"ID":"80901dca-016d-4c52-b87d-f953b0689f1a","Type":"ContainerDied","Data":"d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652"}
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.657821 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2476c6970ed31b87ae06c75ff5d3d790809e5e9d2953f253aa58ef3bf2d6652"
Feb 21 08:14:48 crc kubenswrapper[4820]: I0221 08:14:48.657911 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bef-account-create-update-7n4bl"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.309901 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6768b"]
Feb 21 08:14:50 crc kubenswrapper[4820]: E0221 08:14:50.310371 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80901dca-016d-4c52-b87d-f953b0689f1a" containerName="mariadb-account-create-update"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="80901dca-016d-4c52-b87d-f953b0689f1a" containerName="mariadb-account-create-update"
Feb 21 08:14:50 crc kubenswrapper[4820]: E0221 08:14:50.310405 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerName="mariadb-database-create"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310413 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerName="mariadb-database-create"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310617 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" containerName="mariadb-database-create"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.310653 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="80901dca-016d-4c52-b87d-f953b0689f1a" containerName="mariadb-account-create-update"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.311329 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.314393 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.314444 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zbxkp"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.314468 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.320694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6768b"]
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.459398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.459684 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.459942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.561338 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.561481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.561542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.567294 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.568687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.579088 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"neutron-db-sync-6768b\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") " pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:50 crc kubenswrapper[4820]: I0221 08:14:50.632791 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.090181 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6768b"]
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.691994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerStarted","Data":"150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1"}
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.692319 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerStarted","Data":"b3f2799c6d49474e55cf8838e027381e07d2a801685b5b9f577924be80148edf"}
Feb 21 08:14:51 crc kubenswrapper[4820]: I0221 08:14:51.712119 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6768b" podStartSLOduration=1.712097805 podStartE2EDuration="1.712097805s" podCreationTimestamp="2026-02-21 08:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:14:51.707837429 +0000 UTC m=+5266.740921627" watchObservedRunningTime="2026-02-21 08:14:51.712097805 +0000 UTC m=+5266.745181993"
Feb 21 08:14:55 crc kubenswrapper[4820]: I0221 08:14:55.723423 4820 generic.go:334] "Generic (PLEG): container finished" podID="46c29c61-83db-423e-8e56-52c1637985e2" containerID="150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1" exitCode=0
Feb 21 08:14:55 crc kubenswrapper[4820]: I0221 08:14:55.723512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerDied","Data":"150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1"}
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.032334 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.181095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") pod \"46c29c61-83db-423e-8e56-52c1637985e2\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") "
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.182130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") pod \"46c29c61-83db-423e-8e56-52c1637985e2\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") "
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.182484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") pod \"46c29c61-83db-423e-8e56-52c1637985e2\" (UID: \"46c29c61-83db-423e-8e56-52c1637985e2\") "
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.187695 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76" (OuterVolumeSpecName: "kube-api-access-6jj76") pod "46c29c61-83db-423e-8e56-52c1637985e2" (UID: "46c29c61-83db-423e-8e56-52c1637985e2"). InnerVolumeSpecName "kube-api-access-6jj76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.203706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46c29c61-83db-423e-8e56-52c1637985e2" (UID: "46c29c61-83db-423e-8e56-52c1637985e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.211811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config" (OuterVolumeSpecName: "config") pod "46c29c61-83db-423e-8e56-52c1637985e2" (UID: "46c29c61-83db-423e-8e56-52c1637985e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.286676 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.286755 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jj76\" (UniqueName: \"kubernetes.io/projected/46c29c61-83db-423e-8e56-52c1637985e2-kube-api-access-6jj76\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.286768 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c29c61-83db-423e-8e56-52c1637985e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.744042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6768b" event={"ID":"46c29c61-83db-423e-8e56-52c1637985e2","Type":"ContainerDied","Data":"b3f2799c6d49474e55cf8838e027381e07d2a801685b5b9f577924be80148edf"}
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.744354 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f2799c6d49474e55cf8838e027381e07d2a801685b5b9f577924be80148edf"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.744353 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6768b"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.861537 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"]
Feb 21 08:14:57 crc kubenswrapper[4820]: E0221 08:14:57.863744 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c29c61-83db-423e-8e56-52c1637985e2" containerName="neutron-db-sync"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.863893 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c29c61-83db-423e-8e56-52c1637985e2" containerName="neutron-db-sync"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.864204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c29c61-83db-423e-8e56-52c1637985e2" containerName="neutron-db-sync"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.865543 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.884648 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"]
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.965955 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6586587ddd-vncjg"]
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.967712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974133 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974149 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"]
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974266 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zbxkp"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974381 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 21 08:14:57 crc kubenswrapper[4820]: I0221 08:14:57.974481 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002933 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.002979 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.003020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.104948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105316 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105441 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105525 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.105557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.107573 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.107883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.108376 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.108728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.128211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"dnsmasq-dns-7c57cf7787-z47mk\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.199975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.210703 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.216281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.216373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.218212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.220718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.230468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"neutron-6586587ddd-vncjg\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.296683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.730155 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"]
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.753468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerStarted","Data":"aaa04799154f4e72c5b03417ed41306779ddc85795c0bc38fbe0c0a1449205db"}
Feb 21 08:14:58 crc kubenswrapper[4820]: W0221 08:14:58.991934 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4cb47d9_b40c_4c9e_bf0c_848e587adc1d.slice/crio-5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe WatchSource:0}: Error finding container 5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe: Status 404 returned error can't find the container with id 5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe
Feb 21 08:14:58 crc kubenswrapper[4820]: I0221 08:14:58.997248 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"]
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.766095 4820 generic.go:334] "Generic (PLEG): container finished" podID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" exitCode=0
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.766294 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerDied","Data":"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3"}
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769180 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerStarted","Data":"11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9"}
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerStarted","Data":"287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4"}
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerStarted","Data":"5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe"}
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.769742 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6586587ddd-vncjg"
Feb 21 08:14:59 crc kubenswrapper[4820]: I0221 08:14:59.812898 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6586587ddd-vncjg" podStartSLOduration=2.812879075 podStartE2EDuration="2.812879075s" podCreationTimestamp="2026-02-21 08:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:14:59.804813417 +0000 UTC m=+5274.837897615" watchObservedRunningTime="2026-02-21 08:14:59.812879075 +0000 UTC m=+5274.845963273"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.149020 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"]
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.150828 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.153943 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.155218 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.168695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.168780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.168855 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.169902 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"]
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.270757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.270841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.270924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.271720 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.279450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.288087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"collect-profiles-29527695-zm45p\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.468437 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67f7f95649-vvsjb"]
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.469840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67f7f95649-vvsjb"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.473262 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.474032 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.492852 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67f7f95649-vvsjb"]
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.522059 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-public-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb"
Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-ovndb-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb"
Feb 21 08:15:00 crc kubenswrapper[4820]:
I0221 08:15:00.576499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-httpd-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2f7\" (UniqueName: \"kubernetes.io/projected/546bedfc-a666-471b-9a9f-e4f4dd1e629e-kube-api-access-gs2f7\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-internal-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.576643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-combined-ca-bundle\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.678821 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-combined-ca-bundle\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc 
kubenswrapper[4820]: I0221 08:15:00.679172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-public-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-ovndb-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-httpd-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2f7\" (UniqueName: \"kubernetes.io/projected/546bedfc-a666-471b-9a9f-e4f4dd1e629e-kube-api-access-gs2f7\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.679351 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-internal-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.685690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-ovndb-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.686980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-public-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.687909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.688446 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-internal-tls-certs\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.688511 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-combined-ca-bundle\") pod 
\"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.689567 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/546bedfc-a666-471b-9a9f-e4f4dd1e629e-httpd-config\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.703025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2f7\" (UniqueName: \"kubernetes.io/projected/546bedfc-a666-471b-9a9f-e4f4dd1e629e-kube-api-access-gs2f7\") pod \"neutron-67f7f95649-vvsjb\" (UID: \"546bedfc-a666-471b-9a9f-e4f4dd1e629e\") " pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.789552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerStarted","Data":"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf"} Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.789914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.790069 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.818697 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" podStartSLOduration=3.818678009 podStartE2EDuration="3.818678009s" podCreationTimestamp="2026-02-21 08:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:00.808640228 +0000 UTC m=+5275.841724426" watchObservedRunningTime="2026-02-21 08:15:00.818678009 +0000 UTC m=+5275.851762207" Feb 21 08:15:00 crc kubenswrapper[4820]: I0221 08:15:00.956835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.077772 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.086856 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pnwbk"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.157390 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67f7f95649-vvsjb"] Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.705842 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a608f92-6849-4847-9b75-495f1d27b9cf" path="/var/lib/kubelet/pods/5a608f92-6849-4847-9b75-495f1d27b9cf/volumes" Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.799732 4820 generic.go:334] "Generic (PLEG): container finished" podID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerID="b04b97fcb09f93be41f1283cfb58d7e98542300a672bfd210a8873ecd384f3d2" exitCode=0 Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.799844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" event={"ID":"6fce41e0-c5c8-4286-8a58-cd620c05f4fc","Type":"ContainerDied","Data":"b04b97fcb09f93be41f1283cfb58d7e98542300a672bfd210a8873ecd384f3d2"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.799880 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" event={"ID":"6fce41e0-c5c8-4286-8a58-cd620c05f4fc","Type":"ContainerStarted","Data":"e4332f57da00c4a1d4978769ec6441f0918ffa92115a1413b17aa52a9e83aebe"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803311 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67f7f95649-vvsjb" event={"ID":"546bedfc-a666-471b-9a9f-e4f4dd1e629e","Type":"ContainerStarted","Data":"92dbeea807896385169f86b9d4a8842bfc3b4cdcd6cb220b3168f76a6416a2d4"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803356 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803373 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67f7f95649-vvsjb" event={"ID":"546bedfc-a666-471b-9a9f-e4f4dd1e629e","Type":"ContainerStarted","Data":"c7267b27d2c5b641794a7c01b72e3f124cbc7e8b026b484f6a960d6994434506"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.803385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67f7f95649-vvsjb" event={"ID":"546bedfc-a666-471b-9a9f-e4f4dd1e629e","Type":"ContainerStarted","Data":"528df5a9372a251b611fb2e5966cabaff9cb4b5c18fae221814dfa7be6ca5c5a"} Feb 21 08:15:01 crc kubenswrapper[4820]: I0221 08:15:01.847216 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67f7f95649-vvsjb" podStartSLOduration=1.847191997 podStartE2EDuration="1.847191997s" podCreationTimestamp="2026-02-21 08:15:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:01.838350978 +0000 UTC m=+5276.871435176" watchObservedRunningTime="2026-02-21 08:15:01.847191997 +0000 UTC m=+5276.880276205" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.167514 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.220203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") pod \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.220403 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") pod \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.220543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") pod \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\" (UID: \"6fce41e0-c5c8-4286-8a58-cd620c05f4fc\") " Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.221826 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fce41e0-c5c8-4286-8a58-cd620c05f4fc" (UID: "6fce41e0-c5c8-4286-8a58-cd620c05f4fc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.227418 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fce41e0-c5c8-4286-8a58-cd620c05f4fc" (UID: "6fce41e0-c5c8-4286-8a58-cd620c05f4fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.227506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp" (OuterVolumeSpecName: "kube-api-access-x7tjp") pod "6fce41e0-c5c8-4286-8a58-cd620c05f4fc" (UID: "6fce41e0-c5c8-4286-8a58-cd620c05f4fc"). InnerVolumeSpecName "kube-api-access-x7tjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.322259 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.322291 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7tjp\" (UniqueName: \"kubernetes.io/projected/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-kube-api-access-x7tjp\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.322300 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fce41e0-c5c8-4286-8a58-cd620c05f4fc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.817420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" 
event={"ID":"6fce41e0-c5c8-4286-8a58-cd620c05f4fc","Type":"ContainerDied","Data":"e4332f57da00c4a1d4978769ec6441f0918ffa92115a1413b17aa52a9e83aebe"} Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.817457 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p" Feb 21 08:15:03 crc kubenswrapper[4820]: I0221 08:15:03.817464 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4332f57da00c4a1d4978769ec6441f0918ffa92115a1413b17aa52a9e83aebe" Feb 21 08:15:04 crc kubenswrapper[4820]: I0221 08:15:04.232146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 08:15:04 crc kubenswrapper[4820]: I0221 08:15:04.240253 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527650-5nntb"] Feb 21 08:15:05 crc kubenswrapper[4820]: I0221 08:15:05.708642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9686bf95-baf7-4066-8769-66f168be0215" path="/var/lib/kubelet/pods/9686bf95-baf7-4066-8769-66f168be0215/volumes" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.201423 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.287339 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.287647 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns" containerID="cri-o://99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" gracePeriod=10 Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.755963 4820 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856481 4820 generic.go:334] "Generic (PLEG): container finished" podID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" exitCode=0 Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerDied","Data":"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877"} Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856562 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" event={"ID":"f5b4d95c-af87-417e-a56b-20cb7a43c2e7","Type":"ContainerDied","Data":"e4f69cac7c8ea8139b87e81abdba2e547b7e3f99598e2c84fa49315dcdd98eeb"} Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856581 4820 scope.go:117] "RemoveContainer" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.856735 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56f68c4f9-lzs2s" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.878781 4820 scope.go:117] "RemoveContainer" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.897421 4820 scope.go:117] "RemoveContainer" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" Feb 21 08:15:08 crc kubenswrapper[4820]: E0221 08:15:08.897857 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877\": container with ID starting with 99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877 not found: ID does not exist" containerID="99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.897904 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877"} err="failed to get container status \"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877\": rpc error: code = NotFound desc = could not find container \"99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877\": container with ID starting with 99991f97a1ab69ded338761268074b7e5eb31eda39239db7c409479622cbb877 not found: ID does not exist" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.897935 4820 scope.go:117] "RemoveContainer" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" Feb 21 08:15:08 crc kubenswrapper[4820]: E0221 08:15:08.898322 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260\": container with ID starting with 
fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260 not found: ID does not exist" containerID="fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.898345 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260"} err="failed to get container status \"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260\": rpc error: code = NotFound desc = could not find container \"fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260\": container with ID starting with fcef6a671d534b591742f99f71cc3811a1266fa30849aa539b8d7d63c46de260 not found: ID does not exist" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910322 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.910441 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") pod \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\" (UID: \"f5b4d95c-af87-417e-a56b-20cb7a43c2e7\") " Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.915630 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z" (OuterVolumeSpecName: "kube-api-access-7h68z") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "kube-api-access-7h68z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.961182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.963514 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.967558 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:08 crc kubenswrapper[4820]: I0221 08:15:08.975438 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config" (OuterVolumeSpecName: "config") pod "f5b4d95c-af87-417e-a56b-20cb7a43c2e7" (UID: "f5b4d95c-af87-417e-a56b-20cb7a43c2e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013086 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h68z\" (UniqueName: \"kubernetes.io/projected/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-kube-api-access-7h68z\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013123 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013135 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013147 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 
08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.013157 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b4d95c-af87-417e-a56b-20cb7a43c2e7-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.187291 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.196451 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56f68c4f9-lzs2s"] Feb 21 08:15:09 crc kubenswrapper[4820]: I0221 08:15:09.706093 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" path="/var/lib/kubelet/pods/f5b4d95c-af87-417e-a56b-20cb7a43c2e7/volumes" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.815943 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816004 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816540 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:15:13 crc kubenswrapper[4820]: I0221 08:15:13.816648 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" gracePeriod=600 Feb 21 08:15:13 crc kubenswrapper[4820]: E0221 08:15:13.939740 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.914314 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" exitCode=0 Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.914360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"} Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.914393 4820 scope.go:117] "RemoveContainer" containerID="8b576e514e31a08e28f68fa4c688b72455bc5c0da6c05b78101822bef0984897" Feb 21 08:15:14 crc kubenswrapper[4820]: I0221 08:15:14.915373 4820 
scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:15:14 crc kubenswrapper[4820]: E0221 08:15:14.915950 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:18 crc kubenswrapper[4820]: I0221 08:15:18.453165 4820 scope.go:117] "RemoveContainer" containerID="c2867835bac0090aaa7273a7c4ef4cb3c7da8d37f816ccb9d979c732e69cab4f" Feb 21 08:15:18 crc kubenswrapper[4820]: I0221 08:15:18.474750 4820 scope.go:117] "RemoveContainer" containerID="3c304cff3e4ea891fe22f2e446f6db20e2204d9e270769a7f2bedb12df9f52ce" Feb 21 08:15:27 crc kubenswrapper[4820]: I0221 08:15:27.696978 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:15:27 crc kubenswrapper[4820]: E0221 08:15:27.697691 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:15:28 crc kubenswrapper[4820]: I0221 08:15:28.308115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.802530 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67f7f95649-vvsjb" Feb 21 
08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.858874 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"] Feb 21 08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.859198 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6586587ddd-vncjg" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api" containerID="cri-o://287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4" gracePeriod=30 Feb 21 08:15:30 crc kubenswrapper[4820]: I0221 08:15:30.859708 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6586587ddd-vncjg" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd" containerID="cri-o://11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9" gracePeriod=30 Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224374 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:31 crc kubenswrapper[4820]: E0221 08:15:31.224696 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerName="collect-profiles" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224713 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerName="collect-profiles" Feb 21 08:15:31 crc kubenswrapper[4820]: E0221 08:15:31.224731 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="init" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224737 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="init" Feb 21 08:15:31 crc kubenswrapper[4820]: E0221 08:15:31.224753 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" 
containerName="dnsmasq-dns" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224760 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b4d95c-af87-417e-a56b-20cb7a43c2e7" containerName="dnsmasq-dns" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.224928 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" containerName="collect-profiles" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.226082 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.255912 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.256801 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.256877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.256908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.360297 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361787 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.361953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.383538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"redhat-operators-kmzgn\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.543816 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:31 crc kubenswrapper[4820]: I0221 08:15:31.999401 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:32 crc kubenswrapper[4820]: I0221 08:15:32.060417 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerID="11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9" exitCode=0 Feb 21 08:15:32 crc kubenswrapper[4820]: I0221 08:15:32.060444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerDied","Data":"11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9"} Feb 21 08:15:32 crc kubenswrapper[4820]: I0221 08:15:32.061579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerStarted","Data":"8f92b520fb9e9df10aafe11fead045065f6b3ec213cdd43575b3f0ea407190ec"} Feb 21 08:15:33 crc kubenswrapper[4820]: I0221 08:15:33.071429 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9" exitCode=0 Feb 21 08:15:33 crc kubenswrapper[4820]: I0221 08:15:33.071717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"} Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.678555 4820 generic.go:334] "Generic (PLEG): container finished" podID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerID="287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4" exitCode=0 Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.679146 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerDied","Data":"287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4"} Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.909516 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.971904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.971954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.972488 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.973009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.973169 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") pod \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\" (UID: \"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d\") " Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.978311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc" (OuterVolumeSpecName: "kube-api-access-sbdqc") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "kube-api-access-sbdqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:35 crc kubenswrapper[4820]: I0221 08:15:35.978449 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.021784 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config" (OuterVolumeSpecName: "config") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.030347 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.041596 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" (UID: "d4cb47d9-b40c-4c9e-bf0c-848e587adc1d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076467 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076520 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076534 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbdqc\" (UniqueName: \"kubernetes.io/projected/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-kube-api-access-sbdqc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076546 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.076607 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.688148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6586587ddd-vncjg" event={"ID":"d4cb47d9-b40c-4c9e-bf0c-848e587adc1d","Type":"ContainerDied","Data":"5f47582cecf306ea8d6ece4b7af5c6232216da6b6b11776fed71214288e4fabe"} Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.688187 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6586587ddd-vncjg" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.688478 4820 scope.go:117] "RemoveContainer" containerID="11861d6d5175d35d2314b25d1793d717cd4d57bb4ff0b720acacb28e4b5c5bd9" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.691283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerStarted","Data":"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"} Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.708711 4820 scope.go:117] "RemoveContainer" containerID="287b5cf3eb5205f19c544a35f0ea17dff1354f8b91f6da2b382af68232aa11a4" Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.732214 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"] Feb 21 08:15:36 crc kubenswrapper[4820]: I0221 08:15:36.739586 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6586587ddd-vncjg"] Feb 21 08:15:37 crc kubenswrapper[4820]: I0221 08:15:37.703731 4820 generic.go:334] "Generic (PLEG): container finished" podID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b" exitCode=0 Feb 21 08:15:37 crc kubenswrapper[4820]: I0221 08:15:37.709542 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" path="/var/lib/kubelet/pods/d4cb47d9-b40c-4c9e-bf0c-848e587adc1d/volumes" Feb 21 08:15:37 crc kubenswrapper[4820]: I0221 08:15:37.710123 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"} Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.216696 4820 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/swift-ring-rebalance-7rrmn"] Feb 21 08:15:39 crc kubenswrapper[4820]: E0221 08:15:39.217528 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217541 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd" Feb 21 08:15:39 crc kubenswrapper[4820]: E0221 08:15:39.217548 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217555 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217713 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-api" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.217725 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cb47d9-b40c-4c9e-bf0c-848e587adc1d" containerName="neutron-httpd" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.218228 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.223792 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.223999 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.224103 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.224120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.225407 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kqpjg" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227356 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod 
\"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.227830 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.291158 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"] Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.301318 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-ring-rebalance-8nnn7"] Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.302461 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.310866 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"] Feb 21 08:15:39 crc kubenswrapper[4820]: E0221 08:15:39.319763 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wlqms ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-7rrmn" podUID="d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.331771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 
08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332467 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn" Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 
08:15:39.332820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.332947 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.333302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.336549 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.336773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.337566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.344297 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8nnn7"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.350182 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.353228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.361630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.376353 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.377885 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.379487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"swift-ring-rebalance-7rrmn\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") " pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.407882 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.436068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.436388 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.436821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437197 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.437759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.438073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.438432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.438716 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.442164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.443575 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.444989 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.458304 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"swift-ring-rebalance-8nnn7\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.544206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.545191 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.546061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.549098 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.549617 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.583051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"dnsmasq-dns-5b6dfcbcff-8zx84\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") " pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.620634 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.742084 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.742519 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.743478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerStarted","Data":"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"}
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.809954 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kmzgn" podStartSLOduration=3.211232882 podStartE2EDuration="8.809928925s" podCreationTimestamp="2026-02-21 08:15:31 +0000 UTC" firstStartedPulling="2026-02-21 08:15:33.074403273 +0000 UTC m=+5308.107487471" lastFinishedPulling="2026-02-21 08:15:38.673099316 +0000 UTC m=+5313.706183514" observedRunningTime="2026-02-21 08:15:39.794743114 +0000 UTC m=+5314.827827432" watchObservedRunningTime="2026-02-21 08:15:39.809928925 +0000 UTC m=+5314.843013123"
Feb 21 08:15:39 crc kubenswrapper[4820]: I0221 08:15:39.873015 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050125 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050479 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050516 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050563 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050628 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") pod \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\" (UID: \"d7e7b15f-5c28-4eab-a61f-e5997caa1e3b\") "
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.050958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts" (OuterVolumeSpecName: "scripts") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.051111 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.051311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.057158 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.060174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.060285 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.061397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms" (OuterVolumeSpecName: "kube-api-access-wlqms") pod "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" (UID: "d7e7b15f-5c28-4eab-a61f-e5997caa1e3b"). InnerVolumeSpecName "kube-api-access-wlqms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153570 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153606 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153621 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153631 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153643 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153653 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.153664 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlqms\" (UniqueName: \"kubernetes.io/projected/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b-kube-api-access-wlqms\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.182925 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8nnn7"]
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.303386 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:15:40 crc kubenswrapper[4820]: W0221 08:15:40.305958 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd72eaa53_54ec_46af_91c3_fcf248385b34.slice/crio-2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff WatchSource:0}: Error finding container 2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff: Status 404 returned error can't find the container with id 2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.763282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerStarted","Data":"4bd963e4d1f0445acbcb335e2304cd1d582a06226ce3744e303ccdb9e0f9b192"}
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767062 4820 generic.go:334] "Generic (PLEG): container finished" podID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerID="093a84d21a6636251da79290a491d1bbf076f8c343441c7e7b5d8f0efd814896" exitCode=0
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerDied","Data":"093a84d21a6636251da79290a491d1bbf076f8c343441c7e7b5d8f0efd814896"}
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerStarted","Data":"2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff"}
Feb 21 08:15:40 crc kubenswrapper[4820]: I0221 08:15:40.767775 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7rrmn"
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.002014 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"]
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.011482 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-7rrmn"]
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.551372 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.551630 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.709480 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e7b15f-5c28-4eab-a61f-e5997caa1e3b" path="/var/lib/kubelet/pods/d7e7b15f-5c28-4eab-a61f-e5997caa1e3b/volumes"
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.780003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerStarted","Data":"b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0"}
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.780053 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:15:41 crc kubenswrapper[4820]: I0221 08:15:41.805768 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" podStartSLOduration=2.805745555 podStartE2EDuration="2.805745555s" podCreationTimestamp="2026-02-21 08:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:41.802999361 +0000 UTC m=+5316.836083559" watchObservedRunningTime="2026-02-21 08:15:41.805745555 +0000 UTC m=+5316.838829753"
Feb 21 08:15:42 crc kubenswrapper[4820]: I0221 08:15:42.602694 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmzgn" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" probeResult="failure" output=<
Feb 21 08:15:42 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 08:15:42 crc kubenswrapper[4820]: >
Feb 21 08:15:42 crc kubenswrapper[4820]: I0221 08:15:42.696739 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:15:42 crc kubenswrapper[4820]: E0221 08:15:42.697149 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.848134 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-cc65c7f54-9sg96"]
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.849825 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.851976 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.852328 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.853255 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.865599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cc65c7f54-9sg96"]
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-internal-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-run-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929457 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-combined-ca-bundle\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-etc-swift\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-config-data\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-log-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdqwv\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-kube-api-access-vdqwv\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:43 crc kubenswrapper[4820]: I0221 08:15:43.929815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-public-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdqwv\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-kube-api-access-vdqwv\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031262 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-public-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-internal-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-run-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-combined-ca-bundle\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-etc-swift\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-config-data\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.031468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-log-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.032037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-log-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.032596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-run-httpd\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.038076 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-config-data\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.038331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-etc-swift\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.040762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-internal-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.041215 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-public-tls-certs\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.041681 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-combined-ca-bundle\") pod \"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.050388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdqwv\" (UniqueName: \"kubernetes.io/projected/1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d-kube-api-access-vdqwv\") pod
\"swift-proxy-cc65c7f54-9sg96\" (UID: \"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d\") " pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:44 crc kubenswrapper[4820]: I0221 08:15:44.180641 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:45 crc kubenswrapper[4820]: W0221 08:15:45.418379 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a6cc6cf_14b3_416d_a415_22fbe0dd9b9d.slice/crio-0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1 WatchSource:0}: Error finding container 0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1: Status 404 returned error can't find the container with id 0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1 Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.421121 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cc65c7f54-9sg96"] Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.814723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerStarted","Data":"295e6e465ea00ddb64edf2f9ad1cdbf100ee1235cdc25b8479ed9ca490d1c293"} Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.817359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc65c7f54-9sg96" event={"ID":"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d","Type":"ContainerStarted","Data":"fc711d1e3d3624a12e28232bfb851ef77cb084672284adb47a256f35df79a888"} Feb 21 08:15:45 crc kubenswrapper[4820]: I0221 08:15:45.817397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc65c7f54-9sg96" event={"ID":"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d","Type":"ContainerStarted","Data":"0a58956ac3d969e40afa82793860a36ca0c8bcc47eeb80403d2164d07fab31e1"} Feb 21 08:15:45 crc kubenswrapper[4820]: 
I0221 08:15:45.831943 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8nnn7" podStartSLOduration=2.242753654 podStartE2EDuration="6.831922141s" podCreationTimestamp="2026-02-21 08:15:39 +0000 UTC" firstStartedPulling="2026-02-21 08:15:40.186683979 +0000 UTC m=+5315.219768177" lastFinishedPulling="2026-02-21 08:15:44.775852466 +0000 UTC m=+5319.808936664" observedRunningTime="2026-02-21 08:15:45.82970569 +0000 UTC m=+5320.862789898" watchObservedRunningTime="2026-02-21 08:15:45.831922141 +0000 UTC m=+5320.865006339" Feb 21 08:15:46 crc kubenswrapper[4820]: I0221 08:15:46.829823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc65c7f54-9sg96" event={"ID":"1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d","Type":"ContainerStarted","Data":"738a1519d1fe0347989f862b01be1a9eb1eac3eae2efa50e4fa5ed3a4a51f3f0"} Feb 21 08:15:46 crc kubenswrapper[4820]: I0221 08:15:46.855261 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-cc65c7f54-9sg96" podStartSLOduration=3.855220418 podStartE2EDuration="3.855220418s" podCreationTimestamp="2026-02-21 08:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:15:46.850618673 +0000 UTC m=+5321.883702871" watchObservedRunningTime="2026-02-21 08:15:46.855220418 +0000 UTC m=+5321.888304626" Feb 21 08:15:47 crc kubenswrapper[4820]: I0221 08:15:47.836419 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:47 crc kubenswrapper[4820]: I0221 08:15:47.836738 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cc65c7f54-9sg96" Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.743554 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" 
Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.799886 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.800152 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns" containerID="cri-o://f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" gracePeriod=10 Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.855825 4820 generic.go:334] "Generic (PLEG): container finished" podID="867214ab-adcb-4e78-838b-a16cda8f543c" containerID="295e6e465ea00ddb64edf2f9ad1cdbf100ee1235cdc25b8479ed9ca490d1c293" exitCode=0 Feb 21 08:15:49 crc kubenswrapper[4820]: I0221 08:15:49.855867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerDied","Data":"295e6e465ea00ddb64edf2f9ad1cdbf100ee1235cdc25b8479ed9ca490d1c293"} Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.285897 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.385692 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.385774 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.386541 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.386705 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.386734 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") pod \"4335ce63-5465-40bb-aedb-f31d8c7807fd\" (UID: \"4335ce63-5465-40bb-aedb-f31d8c7807fd\") " Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.391684 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6" (OuterVolumeSpecName: "kube-api-access-qfsf6") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "kube-api-access-qfsf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.448267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.451792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.456706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.483452 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config" (OuterVolumeSpecName: "config") pod "4335ce63-5465-40bb-aedb-f31d8c7807fd" (UID: "4335ce63-5465-40bb-aedb-f31d8c7807fd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488846 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488886 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488900 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488912 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4335ce63-5465-40bb-aedb-f31d8c7807fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.488924 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsf6\" (UniqueName: \"kubernetes.io/projected/4335ce63-5465-40bb-aedb-f31d8c7807fd-kube-api-access-qfsf6\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.865918 4820 generic.go:334] "Generic (PLEG): container finished" podID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" exitCode=0 Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866022 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerDied","Data":"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf"} Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c57cf7787-z47mk" event={"ID":"4335ce63-5465-40bb-aedb-f31d8c7807fd","Type":"ContainerDied","Data":"aaa04799154f4e72c5b03417ed41306779ddc85795c0bc38fbe0c0a1449205db"} Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.866113 4820 scope.go:117] "RemoveContainer" containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.893454 4820 scope.go:117] "RemoveContainer" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.901794 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.910096 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c57cf7787-z47mk"] Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.925710 4820 scope.go:117] "RemoveContainer" containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" Feb 21 08:15:50 crc kubenswrapper[4820]: E0221 08:15:50.926491 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf\": container with ID starting with f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf not found: ID does not exist" 
containerID="f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.926532 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf"} err="failed to get container status \"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf\": rpc error: code = NotFound desc = could not find container \"f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf\": container with ID starting with f11f90d819f86d63a3762d371d778be1f50ed00f7180642df07fd67516f6b3cf not found: ID does not exist" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.926556 4820 scope.go:117] "RemoveContainer" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" Feb 21 08:15:50 crc kubenswrapper[4820]: E0221 08:15:50.927183 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3\": container with ID starting with 578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3 not found: ID does not exist" containerID="578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3" Feb 21 08:15:50 crc kubenswrapper[4820]: I0221 08:15:50.927219 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3"} err="failed to get container status \"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3\": rpc error: code = NotFound desc = could not find container \"578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3\": container with ID starting with 578175fadefed7a0305dc1967d89e71b1671a970338893eeaa95f7eea03e05a3 not found: ID does not exist" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.212936 4820 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301582 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301788 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.301840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") pod \"867214ab-adcb-4e78-838b-a16cda8f543c\" (UID: \"867214ab-adcb-4e78-838b-a16cda8f543c\") " Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.302338 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.302770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.305504 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc" (OuterVolumeSpecName: "kube-api-access-66vhc") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "kube-api-access-66vhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.307061 4820 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.307580 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.325791 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66vhc\" (UniqueName: \"kubernetes.io/projected/867214ab-adcb-4e78-838b-a16cda8f543c-kube-api-access-66vhc\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.325830 4820 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/867214ab-adcb-4e78-838b-a16cda8f543c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.326637 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts" (OuterVolumeSpecName: "scripts") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.328556 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.330143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "867214ab-adcb-4e78-838b-a16cda8f543c" (UID: "867214ab-adcb-4e78-838b-a16cda8f543c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.427962 4820 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.428250 4820 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.428263 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/867214ab-adcb-4e78-838b-a16cda8f543c-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.428275 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/867214ab-adcb-4e78-838b-a16cda8f543c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:15:51 crc 
kubenswrapper[4820]: I0221 08:15:51.600180 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.656801 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.718915 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" path="/var/lib/kubelet/pods/4335ce63-5465-40bb-aedb-f31d8c7807fd/volumes" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.835114 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"] Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.910553 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8nnn7" Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.911057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8nnn7" event={"ID":"867214ab-adcb-4e78-838b-a16cda8f543c","Type":"ContainerDied","Data":"4bd963e4d1f0445acbcb335e2304cd1d582a06226ce3744e303ccdb9e0f9b192"} Feb 21 08:15:51 crc kubenswrapper[4820]: I0221 08:15:51.911079 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd963e4d1f0445acbcb335e2304cd1d582a06226ce3744e303ccdb9e0f9b192" Feb 21 08:15:52 crc kubenswrapper[4820]: I0221 08:15:52.917273 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kmzgn" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server" containerID="cri-o://8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" gracePeriod=2 Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.407249 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.569342 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") pod \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.569410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") pod \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.569586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") pod \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\" (UID: \"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a\") " Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.570381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities" (OuterVolumeSpecName: "utilities") pod "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" (UID: "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.575500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s" (OuterVolumeSpecName: "kube-api-access-sf72s") pod "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" (UID: "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a"). InnerVolumeSpecName "kube-api-access-sf72s". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.671152 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf72s\" (UniqueName: \"kubernetes.io/projected/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-kube-api-access-sf72s\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.671193 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.703797 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" (UID: "cd0443dd-6920-4f8b-bae9-b6bfe07bde2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.773475 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929034 4820 generic.go:334] "Generic (PLEG): container finished" podID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673" exitCode=0
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929096 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmzgn"
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"}
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmzgn" event={"ID":"cd0443dd-6920-4f8b-bae9-b6bfe07bde2a","Type":"ContainerDied","Data":"8f92b520fb9e9df10aafe11fead045065f6b3ec213cdd43575b3f0ea407190ec"}
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.929547 4820 scope.go:117] "RemoveContainer" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.957341 4820 scope.go:117] "RemoveContainer" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.957734 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"]
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.965310 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kmzgn"]
Feb 21 08:15:53 crc kubenswrapper[4820]: I0221 08:15:53.978393 4820 scope.go:117] "RemoveContainer" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.016565 4820 scope.go:117] "RemoveContainer" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"
Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.016989 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673\": container with ID starting with 8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673 not found: ID does not exist" containerID="8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.017020 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673"} err="failed to get container status \"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673\": rpc error: code = NotFound desc = could not find container \"8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673\": container with ID starting with 8e5e78d84e00b7b744766b1dbc3b04cc272978c5f1f906a76bee72f14b547673 not found: ID does not exist"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.017039 4820 scope.go:117] "RemoveContainer" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"
Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.018576 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b\": container with ID starting with 7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b not found: ID does not exist" containerID="7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.018623 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b"} err="failed to get container status \"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b\": rpc error: code = NotFound desc = could not find container \"7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b\": container with ID starting with 7efdec675c992a5b17413f97a4e43cbf0e140d85e5fc67385830150af10a1d2b not found: ID does not exist"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.018651 4820 scope.go:117] "RemoveContainer" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"
Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.018995 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9\": container with ID starting with 22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9 not found: ID does not exist" containerID="22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.019016 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9"} err="failed to get container status \"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9\": rpc error: code = NotFound desc = could not find container \"22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9\": container with ID starting with 22be51ebcb929b1605fcc81885d9a5c645d6a1b8456652d2474b4376ad73d2f9 not found: ID does not exist"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.186586 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.187366 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cc65c7f54-9sg96"
Feb 21 08:15:54 crc kubenswrapper[4820]: I0221 08:15:54.696651 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:15:54 crc kubenswrapper[4820]: E0221 08:15:54.697110 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:15:55 crc kubenswrapper[4820]: I0221 08:15:55.708480 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" path="/var/lib/kubelet/pods/cd0443dd-6920-4f8b-bae9-b6bfe07bde2a/volumes"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.751811 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w9mkt"]
Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752563 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="init"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752584 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="init"
Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752601 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-content"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752611 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-content"
Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752629 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867214ab-adcb-4e78-838b-a16cda8f543c" containerName="swift-ring-rebalance"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752637 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="867214ab-adcb-4e78-838b-a16cda8f543c" containerName="swift-ring-rebalance"
Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752654 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752663 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns"
Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752692 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752700 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server"
Feb 21 08:15:59 crc kubenswrapper[4820]: E0221 08:15:59.752710 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-utilities"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752717 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="extract-utilities"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752917 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="867214ab-adcb-4e78-838b-a16cda8f543c" containerName="swift-ring-rebalance"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752942 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4335ce63-5465-40bb-aedb-f31d8c7807fd" containerName="dnsmasq-dns"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.752965 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0443dd-6920-4f8b-bae9-b6bfe07bde2a" containerName="registry-server"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.753672 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.760363 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w9mkt"]
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.858375 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"]
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.859640 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.861551 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.868339 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"]
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.874192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.874257 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976511 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.976588 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:15:59 crc kubenswrapper[4820]: I0221 08:15:59.977965 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.003005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"cinder-db-create-w9mkt\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") " pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.078427 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.079142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.079333 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.080194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.096977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"cinder-30a5-account-create-update-vlqzg\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") " pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.174256 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.519318 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w9mkt"]
Feb 21 08:16:00 crc kubenswrapper[4820]: W0221 08:16:00.522498 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod526239a3_9756_4dd4_9e38_6474bd1b2709.slice/crio-c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7 WatchSource:0}: Error finding container c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7: Status 404 returned error can't find the container with id c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7
Feb 21 08:16:00 crc kubenswrapper[4820]: W0221 08:16:00.618859 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb19f4a26_20d3_44b1_a159_3fd72a92e68f.slice/crio-c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce WatchSource:0}: Error finding container c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce: Status 404 returned error can't find the container with id c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.624529 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"]
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.997958 4820 generic.go:334] "Generic (PLEG): container finished" podID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerID="9cc15cd98cb2ee5a66c67dae3b7781ebaef37c8edf2a66d0058beed46e459cfa" exitCode=0
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.998056 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9mkt" event={"ID":"526239a3-9756-4dd4-9e38-6474bd1b2709","Type":"ContainerDied","Data":"9cc15cd98cb2ee5a66c67dae3b7781ebaef37c8edf2a66d0058beed46e459cfa"}
Feb 21 08:16:00 crc kubenswrapper[4820]: I0221 08:16:00.998334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9mkt" event={"ID":"526239a3-9756-4dd4-9e38-6474bd1b2709","Type":"ContainerStarted","Data":"c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7"}
Feb 21 08:16:01 crc kubenswrapper[4820]: I0221 08:16:01.006657 4820 generic.go:334] "Generic (PLEG): container finished" podID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerID="2ecad43a902d533086cc0d59299eabdf5fed0eb7581600161e0b6b859242cab9" exitCode=0
Feb 21 08:16:01 crc kubenswrapper[4820]: I0221 08:16:01.006706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a5-account-create-update-vlqzg" event={"ID":"b19f4a26-20d3-44b1-a159-3fd72a92e68f","Type":"ContainerDied","Data":"2ecad43a902d533086cc0d59299eabdf5fed0eb7581600161e0b6b859242cab9"}
Feb 21 08:16:01 crc kubenswrapper[4820]: I0221 08:16:01.006733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a5-account-create-update-vlqzg" event={"ID":"b19f4a26-20d3-44b1-a159-3fd72a92e68f","Type":"ContainerStarted","Data":"c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce"}
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.397377 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.401891 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522139 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") pod \"526239a3-9756-4dd4-9e38-6474bd1b2709\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") "
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522207 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") pod \"526239a3-9756-4dd4-9e38-6474bd1b2709\" (UID: \"526239a3-9756-4dd4-9e38-6474bd1b2709\") "
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") pod \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") "
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522490 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") pod \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\" (UID: \"b19f4a26-20d3-44b1-a159-3fd72a92e68f\") "
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522616 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "526239a3-9756-4dd4-9e38-6474bd1b2709" (UID: "526239a3-9756-4dd4-9e38-6474bd1b2709"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.522919 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526239a3-9756-4dd4-9e38-6474bd1b2709-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.523327 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b19f4a26-20d3-44b1-a159-3fd72a92e68f" (UID: "b19f4a26-20d3-44b1-a159-3fd72a92e68f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.526987 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c" (OuterVolumeSpecName: "kube-api-access-fps8c") pod "b19f4a26-20d3-44b1-a159-3fd72a92e68f" (UID: "b19f4a26-20d3-44b1-a159-3fd72a92e68f"). InnerVolumeSpecName "kube-api-access-fps8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.529258 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff" (OuterVolumeSpecName: "kube-api-access-vvfff") pod "526239a3-9756-4dd4-9e38-6474bd1b2709" (UID: "526239a3-9756-4dd4-9e38-6474bd1b2709"). InnerVolumeSpecName "kube-api-access-vvfff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.625316 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19f4a26-20d3-44b1-a159-3fd72a92e68f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.625371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvfff\" (UniqueName: \"kubernetes.io/projected/526239a3-9756-4dd4-9e38-6474bd1b2709-kube-api-access-vvfff\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:02 crc kubenswrapper[4820]: I0221 08:16:02.625385 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fps8c\" (UniqueName: \"kubernetes.io/projected/b19f4a26-20d3-44b1-a159-3fd72a92e68f-kube-api-access-fps8c\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.021786 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9mkt"
Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.021780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9mkt" event={"ID":"526239a3-9756-4dd4-9e38-6474bd1b2709","Type":"ContainerDied","Data":"c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7"}
Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.021896 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c801e109fcb5160ae90f3cf5f84049471f0083fae9bc74531aa29485766d40f7"
Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.023288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-30a5-account-create-update-vlqzg" event={"ID":"b19f4a26-20d3-44b1-a159-3fd72a92e68f","Type":"ContainerDied","Data":"c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce"}
Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.023311 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-30a5-account-create-update-vlqzg"
Feb 21 08:16:03 crc kubenswrapper[4820]: I0221 08:16:03.023326 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f2e206ac7c045b42cbd5d57388e5db269f31a14363b6793e9e9c42f2473dce"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.137485 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6fhr4"]
Feb 21 08:16:05 crc kubenswrapper[4820]: E0221 08:16:05.138067 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerName="mariadb-database-create"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138079 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerName="mariadb-database-create"
Feb 21 08:16:05 crc kubenswrapper[4820]: E0221 08:16:05.138120 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerName="mariadb-account-create-update"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerName="mariadb-account-create-update"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138300 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" containerName="mariadb-database-create"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138312 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" containerName="mariadb-account-create-update"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.138861 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.141006 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvzc8"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.141633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.142184 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.145332 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fhr4"]
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.265887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.266169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.266412 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.368859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.374320 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.374984 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.375978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.376099 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.385686 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"cinder-db-sync-6fhr4\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.455567 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fhr4"
Feb 21 08:16:05 crc kubenswrapper[4820]: I0221 08:16:05.895290 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fhr4"]
Feb 21 08:16:06 crc kubenswrapper[4820]: I0221 08:16:06.046454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerStarted","Data":"7f522b302e0549e17af1e68fc657579d38aa65e21c0e8afd908be171ed725a44"}
Feb 21 08:16:09 crc kubenswrapper[4820]: I0221 08:16:09.697017 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:16:09 crc kubenswrapper[4820]: E0221 08:16:09.697927 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:16:18 crc kubenswrapper[4820]: I0221 08:16:18.570862 4820 scope.go:117] "RemoveContainer" containerID="7db30347c12dd2be7f43e71cdb85bf1d17d0f2f0e04cb11cd9773d0e72d380c5"
Feb 21 08:16:20 crc kubenswrapper[4820]: I0221 08:16:20.696929 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:16:20 crc kubenswrapper[4820]: E0221 08:16:20.697504 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.556993 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb"
Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.557726 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb"
Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.557880 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs &&
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9x56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6fhr4_openstack(918975eb-d5b2-4b0e-9b35-36e92f03527b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 08:16:26 crc kubenswrapper[4820]: E0221 08:16:26.559357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6fhr4" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" Feb 21 08:16:27 crc kubenswrapper[4820]: E0221 08:16:27.219681 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/cinder-db-sync-6fhr4" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" Feb 21 08:16:34 crc kubenswrapper[4820]: I0221 08:16:34.697135 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:34 crc kubenswrapper[4820]: E0221 08:16:34.697983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:43 crc kubenswrapper[4820]: I0221 08:16:43.344279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" 
event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerStarted","Data":"396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e"} Feb 21 08:16:43 crc kubenswrapper[4820]: I0221 08:16:43.365350 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6fhr4" podStartSLOduration=2.376065914 podStartE2EDuration="38.365327897s" podCreationTimestamp="2026-02-21 08:16:05 +0000 UTC" firstStartedPulling="2026-02-21 08:16:05.901051385 +0000 UTC m=+5340.934135573" lastFinishedPulling="2026-02-21 08:16:41.890313358 +0000 UTC m=+5376.923397556" observedRunningTime="2026-02-21 08:16:43.357921836 +0000 UTC m=+5378.391006054" watchObservedRunningTime="2026-02-21 08:16:43.365327897 +0000 UTC m=+5378.398412095" Feb 21 08:16:45 crc kubenswrapper[4820]: I0221 08:16:45.359631 4820 generic.go:334] "Generic (PLEG): container finished" podID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerID="396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e" exitCode=0 Feb 21 08:16:45 crc kubenswrapper[4820]: I0221 08:16:45.360048 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerDied","Data":"396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e"} Feb 21 08:16:45 crc kubenswrapper[4820]: I0221 08:16:45.702864 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:16:45 crc kubenswrapper[4820]: E0221 08:16:45.703106 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.700285 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801298 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801419 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801602 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: 
\"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.801629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") pod \"918975eb-d5b2-4b0e-9b35-36e92f03527b\" (UID: \"918975eb-d5b2-4b0e-9b35-36e92f03527b\") " Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.803414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.808035 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56" (OuterVolumeSpecName: "kube-api-access-q9x56") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "kube-api-access-q9x56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.808121 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts" (OuterVolumeSpecName: "scripts") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.808581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.834673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.857075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data" (OuterVolumeSpecName: "config-data") pod "918975eb-d5b2-4b0e-9b35-36e92f03527b" (UID: "918975eb-d5b2-4b0e-9b35-36e92f03527b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904110 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904150 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918975eb-d5b2-4b0e-9b35-36e92f03527b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904165 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904177 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904190 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9x56\" (UniqueName: \"kubernetes.io/projected/918975eb-d5b2-4b0e-9b35-36e92f03527b-kube-api-access-q9x56\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:46 crc kubenswrapper[4820]: I0221 08:16:46.904202 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918975eb-d5b2-4b0e-9b35-36e92f03527b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.376667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fhr4" event={"ID":"918975eb-d5b2-4b0e-9b35-36e92f03527b","Type":"ContainerDied","Data":"7f522b302e0549e17af1e68fc657579d38aa65e21c0e8afd908be171ed725a44"} Feb 21 08:16:47 crc 
kubenswrapper[4820]: I0221 08:16:47.376699 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fhr4" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.376736 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f522b302e0549e17af1e68fc657579d38aa65e21c0e8afd908be171ed725a44" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.733373 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:16:47 crc kubenswrapper[4820]: E0221 08:16:47.734003 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerName="cinder-db-sync" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.734016 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerName="cinder-db-sync" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.734181 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" containerName="cinder-db-sync" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.738227 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.768160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"] Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.817766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.817824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.817907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.818334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.818498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.883434 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.884821 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.887118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.887133 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.891514 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.892480 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvzc8" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.906087 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod 
\"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.919667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.920700 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.924798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " 
pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.924978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.926808 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:47 crc kubenswrapper[4820]: I0221 08:16:47.946116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"dnsmasq-dns-98b448c79-xx42c\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") " pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021117 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021215 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.021381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.064764 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122839 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0" Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122957 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.122990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.125709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.127008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.128403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.129205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.129707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.130064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.144038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"cinder-api-0\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") " pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.199666 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.606713 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"]
Feb 21 08:16:48 crc kubenswrapper[4820]: I0221 08:16:48.803754 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.415524 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b" exitCode=0
Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.416005 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerDied","Data":"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"}
Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.416046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerStarted","Data":"147651eae6f2d4d4506345601d2cf298cfe763874e04c3aa44b45feb488eb2f6"}
Feb 21 08:16:49 crc kubenswrapper[4820]: I0221 08:16:49.419864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerStarted","Data":"2d11fa328973af76ce6736058e8404d7bf1916d12ea6f6adb1ab2cb8b5681fe6"}
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.031175 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.462799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerStarted","Data":"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"}
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.464311 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98b448c79-xx42c"
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerStarted","Data":"1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373"}
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerStarted","Data":"3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99"}
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470610 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log" containerID="cri-o://3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99" gracePeriod=30
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470627 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.470662 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api" containerID="cri-o://1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373" gracePeriod=30
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.506112 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.506092423 podStartE2EDuration="3.506092423s" podCreationTimestamp="2026-02-21 08:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:16:50.503113023 +0000 UTC m=+5385.536197221" watchObservedRunningTime="2026-02-21 08:16:50.506092423 +0000 UTC m=+5385.539176611"
Feb 21 08:16:50 crc kubenswrapper[4820]: I0221 08:16:50.515442 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98b448c79-xx42c" podStartSLOduration=3.515415075 podStartE2EDuration="3.515415075s" podCreationTimestamp="2026-02-21 08:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:16:50.487930202 +0000 UTC m=+5385.521014440" watchObservedRunningTime="2026-02-21 08:16:50.515415075 +0000 UTC m=+5385.548499273"
Feb 21 08:16:51 crc kubenswrapper[4820]: I0221 08:16:51.480574 4820 generic.go:334] "Generic (PLEG): container finished" podID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerID="3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99" exitCode=143
Feb 21 08:16:51 crc kubenswrapper[4820]: I0221 08:16:51.480657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerDied","Data":"3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99"}
Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.067087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98b448c79-xx42c"
Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.124013 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.124319 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns" containerID="cri-o://b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0" gracePeriod=10
Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.533154 4820 generic.go:334] "Generic (PLEG): container finished" podID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerID="b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0" exitCode=0
Feb 21 08:16:58 crc kubenswrapper[4820]: I0221 08:16:58.533198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerDied","Data":"b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0"}
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.179441 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216116 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") "
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216224 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") "
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") "
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") "
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.216424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") pod \"d72eaa53-54ec-46af-91c3-fcf248385b34\" (UID: \"d72eaa53-54ec-46af-91c3-fcf248385b34\") "
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.240459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d" (OuterVolumeSpecName: "kube-api-access-4mw5d") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "kube-api-access-4mw5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.270747 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.281561 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config" (OuterVolumeSpecName: "config") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.291451 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.297005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d72eaa53-54ec-46af-91c3-fcf248385b34" (UID: "d72eaa53-54ec-46af-91c3-fcf248385b34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318035 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318068 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318078 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mw5d\" (UniqueName: \"kubernetes.io/projected/d72eaa53-54ec-46af-91c3-fcf248385b34-kube-api-access-4mw5d\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318090 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.318100 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d72eaa53-54ec-46af-91c3-fcf248385b34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.544045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84" event={"ID":"d72eaa53-54ec-46af-91c3-fcf248385b34","Type":"ContainerDied","Data":"2f99dfc7ad3da5248fb62aa73368b7c0f9aa9159b1578a6a81f20720fbce41ff"}
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.544077 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dfcbcff-8zx84"
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.544138 4820 scope.go:117] "RemoveContainer" containerID="b0e87fab33e7400789b46a574269ce095e4c5e8100d4eb2abda1c5d023d41eb0"
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.574841 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.575339 4820 scope.go:117] "RemoveContainer" containerID="093a84d21a6636251da79290a491d1bbf076f8c343441c7e7b5d8f0efd814896"
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.581250 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dfcbcff-8zx84"]
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.697409 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:16:59 crc kubenswrapper[4820]: E0221 08:16:59.698040 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:16:59 crc kubenswrapper[4820]: I0221 08:16:59.730322 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" path="/var/lib/kubelet/pods/d72eaa53-54ec-46af-91c3-fcf248385b34/volumes"
Feb 21 08:17:00 crc kubenswrapper[4820]: I0221 08:17:00.187342 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 21 08:17:10 crc kubenswrapper[4820]: I0221 08:17:10.697325 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:17:10 crc kubenswrapper[4820]: E0221 08:17:10.698498 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:17:20 crc kubenswrapper[4820]: I0221 08:17:20.726120 4820 generic.go:334] "Generic (PLEG): container finished" podID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerID="1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373" exitCode=137
Feb 21 08:17:20 crc kubenswrapper[4820]: I0221 08:17:20.726296 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerDied","Data":"1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373"}
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.485113 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.637859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.637960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.637985 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638112 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638151 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") pod \"195a168a-1e3f-4880-a8b8-a74c58b8adad\" (UID: \"195a168a-1e3f-4880-a8b8-a74c58b8adad\") "
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.638727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs" (OuterVolumeSpecName: "logs") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.639071 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195a168a-1e3f-4880-a8b8-a74c58b8adad-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.639128 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195a168a-1e3f-4880-a8b8-a74c58b8adad-logs\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.643969 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.645151 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9" (OuterVolumeSpecName: "kube-api-access-gkjt9") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "kube-api-access-gkjt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.645698 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts" (OuterVolumeSpecName: "scripts") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.663103 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.681141 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data" (OuterVolumeSpecName: "config-data") pod "195a168a-1e3f-4880-a8b8-a74c58b8adad" (UID: "195a168a-1e3f-4880-a8b8-a74c58b8adad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.736716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195a168a-1e3f-4880-a8b8-a74c58b8adad","Type":"ContainerDied","Data":"2d11fa328973af76ce6736058e8404d7bf1916d12ea6f6adb1ab2cb8b5681fe6"}
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.736799 4820 scope.go:117] "RemoveContainer" containerID="1616e9b30f29f4e34993aeddcfadecb67139c974d297d46c974496e22c415373"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.736832 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742891 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjt9\" (UniqueName: \"kubernetes.io/projected/195a168a-1e3f-4880-a8b8-a74c58b8adad-kube-api-access-gkjt9\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742929 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742944 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742958 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.742969 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195a168a-1e3f-4880-a8b8-a74c58b8adad-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.774589 4820 scope.go:117] "RemoveContainer" containerID="3b68c218ee4231985f5230cf5e6e2b3a0881329794e510c46103e1621fc1be99"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.782903 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.795104 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.810824 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811200 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="init"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811220 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="init"
Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811253 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811260 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api"
Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811276 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811282 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log"
Feb 21 08:17:21 crc kubenswrapper[4820]: E0221 08:17:21.811297 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811302 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811461 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811472 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" containerName="cinder-api-log"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.811485 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72eaa53-54ec-46af-91c3-fcf248385b34" containerName="dnsmasq-dns"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.812451 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.821394 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.821610 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.821991 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.822106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvzc8"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.822120 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.823843 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.825802 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844258 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.844465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.845475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.845577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.947765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.948984 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949499 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.950988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.949866 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.952078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.953091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.953207 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.953458 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0"
Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.954462 4820
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.954817 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.955906 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:21 crc kubenswrapper[4820]: I0221 08:17:21.968336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"cinder-api-0\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " pod="openstack/cinder-api-0" Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.131354 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.374287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.697059 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:22 crc kubenswrapper[4820]: E0221 08:17:22.697334 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:22 crc kubenswrapper[4820]: I0221 08:17:22.745327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerStarted","Data":"3267b575fe33eb154cb252c0c5802dd5fb1d40c51fa9f2b771a061f7ac089e05"} Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.707471 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195a168a-1e3f-4880-a8b8-a74c58b8adad" path="/var/lib/kubelet/pods/195a168a-1e3f-4880-a8b8-a74c58b8adad/volumes" Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.758072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerStarted","Data":"8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d"} Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.758128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerStarted","Data":"93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888"} Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.758210 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 08:17:23 crc kubenswrapper[4820]: I0221 08:17:23.779671 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.779648947 podStartE2EDuration="2.779648947s" podCreationTimestamp="2026-02-21 08:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:17:23.77788444 +0000 UTC m=+5418.810968638" watchObservedRunningTime="2026-02-21 08:17:23.779648947 +0000 UTC m=+5418.812733155" Feb 21 08:17:26 crc kubenswrapper[4820]: I0221 08:17:26.570600 4820 scope.go:117] "RemoveContainer" containerID="1b8f99fcda2042506493b66359457c8391b7f432d8588bbaf5a6223727d8c557" Feb 21 08:17:33 crc kubenswrapper[4820]: I0221 08:17:33.696869 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:33 crc kubenswrapper[4820]: E0221 08:17:33.697668 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:34 crc kubenswrapper[4820]: I0221 08:17:34.395858 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 21 08:17:46 crc kubenswrapper[4820]: I0221 08:17:46.696917 4820 scope.go:117] "RemoveContainer" 
containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:46 crc kubenswrapper[4820]: E0221 08:17:46.697640 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.417381 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.419641 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.423140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.445536 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" 
Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454221 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454387 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.454780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557204 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557228 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557279 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.557690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.565898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.566430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.567976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.568033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.581512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"cinder-scheduler-0\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " pod="openstack/cinder-scheduler-0" Feb 
21 08:17:52 crc kubenswrapper[4820]: I0221 08:17:52.739165 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.183407 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.194134 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.802004 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.802506 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" containerID="cri-o://93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888" gracePeriod=30 Feb 21 08:17:53 crc kubenswrapper[4820]: I0221 08:17:53.802607 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" containerID="cri-o://8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d" gracePeriod=30 Feb 21 08:17:54 crc kubenswrapper[4820]: I0221 08:17:54.056740 4820 generic.go:334] "Generic (PLEG): container finished" podID="d61adc77-1750-4151-8591-10ba08713538" containerID="93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888" exitCode=143 Feb 21 08:17:54 crc kubenswrapper[4820]: I0221 08:17:54.056873 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerDied","Data":"93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888"} Feb 21 08:17:54 crc kubenswrapper[4820]: I0221 08:17:54.060629 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerStarted","Data":"5624406c32ba41e07bb32ce7d2cd8141f5f7956b50a9b42d7db5ce9a49ade7fc"} Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.072614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerStarted","Data":"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"} Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.072950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerStarted","Data":"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"} Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.093633 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.837772618 podStartE2EDuration="3.093611561s" podCreationTimestamp="2026-02-21 08:17:52 +0000 UTC" firstStartedPulling="2026-02-21 08:17:53.193805368 +0000 UTC m=+5448.226889566" lastFinishedPulling="2026-02-21 08:17:53.449644311 +0000 UTC m=+5448.482728509" observedRunningTime="2026-02-21 08:17:55.087784324 +0000 UTC m=+5450.120868522" watchObservedRunningTime="2026-02-21 08:17:55.093611561 +0000 UTC m=+5450.126695759" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.530025 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.532065 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.557498 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.617056 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.617164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.617292 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.720282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.720390 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.720461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.721418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.723064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.750705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"certified-operators-99422\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:55 crc kubenswrapper[4820]: I0221 08:17:55.872004 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:17:56 crc kubenswrapper[4820]: I0221 08:17:56.506723 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:17:56 crc kubenswrapper[4820]: W0221 08:17:56.532478 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c352da_ee5d_4dc2_b5b0_ba5e0e29e272.slice/crio-1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af WatchSource:0}: Error finding container 1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af: Status 404 returned error can't find the container with id 1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.101778 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerID="bea8d0f9f9b315282e1c70c38ae69a741bedc2ad00aa6536f29e1f2864f5b481" exitCode=0 Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.101883 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"bea8d0f9f9b315282e1c70c38ae69a741bedc2ad00aa6536f29e1f2864f5b481"} Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.102220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerStarted","Data":"1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af"} Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.104797 4820 generic.go:334] "Generic (PLEG): container finished" podID="d61adc77-1750-4151-8591-10ba08713538" containerID="8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d" exitCode=0 Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 
08:17:57.104863 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerDied","Data":"8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d"} Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.132735 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.53:8776/healthcheck\": dial tcp 10.217.1.53:8776: connect: connection refused" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.373839 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448500 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448655 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.448931 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs" (OuterVolumeSpecName: "logs") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449475 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449562 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.449676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") pod \"d61adc77-1750-4151-8591-10ba08713538\" (UID: \"d61adc77-1750-4151-8591-10ba08713538\") " Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.450506 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d61adc77-1750-4151-8591-10ba08713538-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.450530 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61adc77-1750-4151-8591-10ba08713538-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.460623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts" (OuterVolumeSpecName: "scripts") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.460760 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.460765 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd" (OuterVolumeSpecName: "kube-api-access-7pwfd") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "kube-api-access-7pwfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.479897 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.528913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.529062 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.536807 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data" (OuterVolumeSpecName: "config-data") pod "d61adc77-1750-4151-8591-10ba08713538" (UID: "d61adc77-1750-4151-8591-10ba08713538"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.551989 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552019 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552031 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552039 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pwfd\" (UniqueName: \"kubernetes.io/projected/d61adc77-1750-4151-8591-10ba08713538-kube-api-access-7pwfd\") on node \"crc\" DevicePath 
\"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552048 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552058 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.552066 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61adc77-1750-4151-8591-10ba08713538-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.697152 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:17:57 crc kubenswrapper[4820]: E0221 08:17:57.697482 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:17:57 crc kubenswrapper[4820]: I0221 08:17:57.740128 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.122361 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerStarted","Data":"e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76"} Feb 21 08:17:58 crc 
kubenswrapper[4820]: I0221 08:17:58.124529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d61adc77-1750-4151-8591-10ba08713538","Type":"ContainerDied","Data":"3267b575fe33eb154cb252c0c5802dd5fb1d40c51fa9f2b771a061f7ac089e05"} Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.124560 4820 scope.go:117] "RemoveContainer" containerID="8bbf6609d10d5dc93e4375f88a256ff522aabe92d5067d549fe9461f4682742d" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.124677 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.147042 4820 scope.go:117] "RemoveContainer" containerID="93a9d17751d74c3e0a7b09cfcab6ac4dbe840d40b66c20ac5fc0464d9e678888" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.184876 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.192381 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.203277 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: E0221 08:17:58.203944 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204030 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" Feb 21 08:17:58 crc kubenswrapper[4820]: E0221 08:17:58.204106 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204163 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204390 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api-log" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.204459 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61adc77-1750-4151-8591-10ba08713538" containerName="cinder-api" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.217575 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.221309 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.221473 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.221606 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.232914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data-custom\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273393 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzc7d\" (UniqueName: \"kubernetes.io/projected/a23af3b4-b486-43b2-b02c-da7b8937e091-kube-api-access-hzc7d\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273701 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.273888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.274027 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a23af3b4-b486-43b2-b02c-da7b8937e091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.274696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc 
kubenswrapper[4820]: I0221 08:17:58.275592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a23af3b4-b486-43b2-b02c-da7b8937e091-logs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.275784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-scripts\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377122 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377191 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a23af3b4-b486-43b2-b02c-da7b8937e091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377207 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a23af3b4-b486-43b2-b02c-da7b8937e091-logs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-scripts\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data-custom\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.377411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzc7d\" (UniqueName: \"kubernetes.io/projected/a23af3b4-b486-43b2-b02c-da7b8937e091-kube-api-access-hzc7d\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 
08:17:58.377954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a23af3b4-b486-43b2-b02c-da7b8937e091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.378146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a23af3b4-b486-43b2-b02c-da7b8937e091-logs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.382873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.383557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.383624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-scripts\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.384092 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " 
pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.384107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.384893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a23af3b4-b486-43b2-b02c-da7b8937e091-config-data-custom\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.394432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzc7d\" (UniqueName: \"kubernetes.io/projected/a23af3b4-b486-43b2-b02c-da7b8937e091-kube-api-access-hzc7d\") pod \"cinder-api-0\" (UID: \"a23af3b4-b486-43b2-b02c-da7b8937e091\") " pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.575665 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 21 08:17:58 crc kubenswrapper[4820]: I0221 08:17:58.808119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.136217 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerID="e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76" exitCode=0 Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.136331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76"} Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.137883 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a23af3b4-b486-43b2-b02c-da7b8937e091","Type":"ContainerStarted","Data":"9772f4127370e54ae50390f1cc9e9ad76af1eefc742acaee4df20b761c03da80"} Feb 21 08:17:59 crc kubenswrapper[4820]: I0221 08:17:59.712687 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61adc77-1750-4151-8591-10ba08713538" path="/var/lib/kubelet/pods/d61adc77-1750-4151-8591-10ba08713538/volumes" Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.152052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerStarted","Data":"d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a"} Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.154483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a23af3b4-b486-43b2-b02c-da7b8937e091","Type":"ContainerStarted","Data":"49624a4ac27dcc18e8f13447613ba1c7177d5ffdbdd4510a0fae4e8ceb6361b7"} Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 
08:18:00.154530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a23af3b4-b486-43b2-b02c-da7b8937e091","Type":"ContainerStarted","Data":"66daafda140763b507277b24604afb2a0ed31073a4413877bf805b48a90f1dc2"} Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.154623 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.176395 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-99422" podStartSLOduration=2.656743832 podStartE2EDuration="5.176375744s" podCreationTimestamp="2026-02-21 08:17:55 +0000 UTC" firstStartedPulling="2026-02-21 08:17:57.103879943 +0000 UTC m=+5452.136964141" lastFinishedPulling="2026-02-21 08:17:59.623511855 +0000 UTC m=+5454.656596053" observedRunningTime="2026-02-21 08:18:00.171805311 +0000 UTC m=+5455.204889529" watchObservedRunningTime="2026-02-21 08:18:00.176375744 +0000 UTC m=+5455.209459942" Feb 21 08:18:00 crc kubenswrapper[4820]: I0221 08:18:00.200009 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.199986703 podStartE2EDuration="2.199986703s" podCreationTimestamp="2026-02-21 08:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:00.194680729 +0000 UTC m=+5455.227764927" watchObservedRunningTime="2026-02-21 08:18:00.199986703 +0000 UTC m=+5455.233070901" Feb 21 08:18:02 crc kubenswrapper[4820]: I0221 08:18:02.977579 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 21 08:18:03 crc kubenswrapper[4820]: I0221 08:18:03.056946 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:03 crc kubenswrapper[4820]: I0221 08:18:03.181106 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler" containerID="cri-o://4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" gracePeriod=30 Feb 21 08:18:03 crc kubenswrapper[4820]: I0221 08:18:03.181435 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe" containerID="cri-o://47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" gracePeriod=30 Feb 21 08:18:04 crc kubenswrapper[4820]: I0221 08:18:04.187528 4820 generic.go:334] "Generic (PLEG): container finished" podID="c820835c-1414-4968-9832-7987b99d05fc" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" exitCode=0 Feb 21 08:18:04 crc kubenswrapper[4820]: I0221 08:18:04.187877 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerDied","Data":"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"} Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.770835 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.873273 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.873704 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.917994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.920694 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:05 crc kubenswrapper[4820]: E0221 08:18:05.926529 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.926578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe" Feb 21 08:18:05 crc kubenswrapper[4820]: E0221 08:18:05.926608 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.926617 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.927070 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="cinder-scheduler" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.927091 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c820835c-1414-4968-9832-7987b99d05fc" containerName="probe" Feb 21 08:18:05 crc 
kubenswrapper[4820]: I0221 08:18:05.928645 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.938682 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939523 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.939746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") pod \"c820835c-1414-4968-9832-7987b99d05fc\" (UID: \"c820835c-1414-4968-9832-7987b99d05fc\") " Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.940430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.962444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.962570 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh" (OuterVolumeSpecName: "kube-api-access-lh4mh") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "kube-api-access-lh4mh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.962666 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts" (OuterVolumeSpecName: "scripts") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:05 crc kubenswrapper[4820]: I0221 08:18:05.998624 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042503 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh4mh\" (UniqueName: \"kubernetes.io/projected/c820835c-1414-4968-9832-7987b99d05fc-kube-api-access-lh4mh\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042519 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042531 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c820835c-1414-4968-9832-7987b99d05fc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042543 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.042555 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.045923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data" (OuterVolumeSpecName: "config-data") pod "c820835c-1414-4968-9832-7987b99d05fc" (UID: "c820835c-1414-4968-9832-7987b99d05fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.149599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.149966 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.150154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.150732 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c820835c-1414-4968-9832-7987b99d05fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.151270 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.151640 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.171081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"community-operators-v9fdw\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205123 4820 generic.go:334] "Generic (PLEG): container finished" podID="c820835c-1414-4968-9832-7987b99d05fc" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" exitCode=0 Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205164 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerDied","Data":"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"} Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205220 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c820835c-1414-4968-9832-7987b99d05fc","Type":"ContainerDied","Data":"5624406c32ba41e07bb32ce7d2cd8141f5f7956b50a9b42d7db5ce9a49ade7fc"} Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205249 4820 scope.go:117] "RemoveContainer" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.205742 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.233645 4820 scope.go:117] "RemoveContainer" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.260035 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.269538 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.274767 4820 scope.go:117] "RemoveContainer" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" Feb 21 08:18:06 crc kubenswrapper[4820]: E0221 08:18:06.277367 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e\": container with ID starting with 47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e not found: ID does not exist" containerID="47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.277434 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e"} err="failed to get container status \"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e\": rpc error: code = NotFound desc = could not find container \"47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e\": container with ID starting with 47a30f7641e5ba34bb60d81231184dcdf4699f1faba4323eb5b8859761f2aa8e not found: ID does not exist" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.277456 4820 scope.go:117] "RemoveContainer" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" 
Feb 21 08:18:06 crc kubenswrapper[4820]: E0221 08:18:06.278818 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38\": container with ID starting with 4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38 not found: ID does not exist" containerID="4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.278843 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38"} err="failed to get container status \"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38\": rpc error: code = NotFound desc = could not find container \"4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38\": container with ID starting with 4b006850f1f725c63d14961868cf4785c977b46f00c0708ab6448823cc8b1b38 not found: ID does not exist" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.286057 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.293671 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.294994 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.305620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.322952 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.339731 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74kqj\" (UniqueName: \"kubernetes.io/projected/77665b9b-37d6-4277-a75b-e30637b4b269-kube-api-access-74kqj\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456479 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-scripts\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77665b9b-37d6-4277-a75b-e30637b4b269-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " 
pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.456547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558369 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74kqj\" (UniqueName: \"kubernetes.io/projected/77665b9b-37d6-4277-a75b-e30637b4b269-kube-api-access-74kqj\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558438 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-scripts\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558470 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77665b9b-37d6-4277-a75b-e30637b4b269-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.558489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.563951 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77665b9b-37d6-4277-a75b-e30637b4b269-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.564130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-scripts\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.565082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.577973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.582802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74kqj\" (UniqueName: \"kubernetes.io/projected/77665b9b-37d6-4277-a75b-e30637b4b269-kube-api-access-74kqj\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.600954 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77665b9b-37d6-4277-a75b-e30637b4b269-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77665b9b-37d6-4277-a75b-e30637b4b269\") " pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.619942 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.878839 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:06 crc kubenswrapper[4820]: W0221 08:18:06.885280 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25afb423_bb97_4560_9e0a_369f39227c3f.slice/crio-7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7 WatchSource:0}: Error finding container 7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7: Status 404 returned error can't find the container with id 7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7 Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.913462 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.915606 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:06 crc kubenswrapper[4820]: I0221 08:18:06.938887 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.056084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 21 08:18:07 crc kubenswrapper[4820]: W0221 08:18:07.061446 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77665b9b_37d6_4277_a75b_e30637b4b269.slice/crio-43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1 WatchSource:0}: Error finding container 43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1: Status 404 returned error can't find the container with id 43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1 Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.065695 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.065899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.066038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.167504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.167636 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.167678 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.168230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.168433 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.183729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"redhat-marketplace-4qwrz\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.215514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77665b9b-37d6-4277-a75b-e30637b4b269","Type":"ContainerStarted","Data":"43fbc8666e63f30fa0c8e906934fc6830b685c395e0486369be71a57e98996e1"} Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.220787 4820 generic.go:334] "Generic (PLEG): container finished" podID="25afb423-bb97-4560-9e0a-369f39227c3f" containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" exitCode=0 Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.220871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426"} Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.220969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerStarted","Data":"7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7"} Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.252192 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.517039 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:07 crc kubenswrapper[4820]: I0221 08:18:07.708190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c820835c-1414-4968-9832-7987b99d05fc" path="/var/lib/kubelet/pods/c820835c-1414-4968-9832-7987b99d05fc/volumes" Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.279815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77665b9b-37d6-4277-a75b-e30637b4b269","Type":"ContainerStarted","Data":"da8375ea9f80c1ffa1959adfb8ff214f9098971d2e9e07de236a15a77ca439e7"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.297691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerStarted","Data":"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.330709 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.343993 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerID="869e57f4b3b81bc2e213a6194eee05e7623a2b65d138f30982cedbf663949894" exitCode=0 Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.344425 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99422" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" containerID="cri-o://d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a" gracePeriod=2 Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.344499 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"869e57f4b3b81bc2e213a6194eee05e7623a2b65d138f30982cedbf663949894"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.344526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerStarted","Data":"ba573cf9178ff188dac0401b89f49fc044a73915f950055de346dd0e475d338c"} Feb 21 08:18:08 crc kubenswrapper[4820]: I0221 08:18:08.696602 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:08 crc kubenswrapper[4820]: E0221 08:18:08.696967 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:09 crc kubenswrapper[4820]: I0221 08:18:09.504963 4820 generic.go:334] "Generic (PLEG): container finished" podID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerID="d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a" exitCode=0 Feb 21 08:18:09 crc kubenswrapper[4820]: I0221 08:18:09.506268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.121050 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.194217 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") pod \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.194391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") pod \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.194469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") pod \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\" (UID: \"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272\") " Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.196376 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities" (OuterVolumeSpecName: "utilities") pod "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" (UID: "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.200342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr" (OuterVolumeSpecName: "kube-api-access-rfgbr") pod "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" (UID: "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272"). InnerVolumeSpecName "kube-api-access-rfgbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.296412 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.296710 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfgbr\" (UniqueName: \"kubernetes.io/projected/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-kube-api-access-rfgbr\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.297334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" (UID: "f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.397983 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.515844 4820 generic.go:334] "Generic (PLEG): container finished" podID="25afb423-bb97-4560-9e0a-369f39227c3f" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" exitCode=0 Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.515929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.519932 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-99422" event={"ID":"f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272","Type":"ContainerDied","Data":"1feb313f601c9a162e1860e43c020b33377995b4e9bdb70862516fbdba9a04af"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.519980 4820 scope.go:117] "RemoveContainer" containerID="d0d1bde3b34bfa6572349da9a0486262a6aa493881fab5089df6f0a0db49a44a" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.520113 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99422" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.530100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77665b9b-37d6-4277-a75b-e30637b4b269","Type":"ContainerStarted","Data":"41e2fd166793b932f0a7fbc5d95846169075bef7a9462777446e71983b472c8f"} Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.559470 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.572718 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-99422"] Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.576227 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.57620698 podStartE2EDuration="4.57620698s" podCreationTimestamp="2026-02-21 08:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:10.565658914 +0000 UTC m=+5465.598743122" watchObservedRunningTime="2026-02-21 08:18:10.57620698 +0000 UTC m=+5465.609291188" Feb 21 08:18:10 crc kubenswrapper[4820]: I0221 08:18:10.586549 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 21 08:18:11 crc 
kubenswrapper[4820]: I0221 08:18:11.621618 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 21 08:18:11 crc kubenswrapper[4820]: I0221 08:18:11.664535 4820 scope.go:117] "RemoveContainer" containerID="e558332901bffebbfbc746735e9388aca5717767185259bcf6d46e6712d87f76" Feb 21 08:18:11 crc kubenswrapper[4820]: I0221 08:18:11.692515 4820 scope.go:117] "RemoveContainer" containerID="bea8d0f9f9b315282e1c70c38ae69a741bedc2ad00aa6536f29e1f2864f5b481" Feb 21 08:18:11 crc kubenswrapper[4820]: I0221 08:18:11.710078 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" path="/var/lib/kubelet/pods/f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272/volumes" Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.642612 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerStarted","Data":"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b"} Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.644633 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerID="229284d09cd2a69b1c9acdbdb5c342d63af66ac1afb08bc6312d62ae998ec868" exitCode=0 Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.644649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"229284d09cd2a69b1c9acdbdb5c342d63af66ac1afb08bc6312d62ae998ec868"} Feb 21 08:18:13 crc kubenswrapper[4820]: I0221 08:18:13.666288 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v9fdw" podStartSLOduration=3.3001947879999998 podStartE2EDuration="8.666270227s" podCreationTimestamp="2026-02-21 08:18:05 +0000 UTC" 
firstStartedPulling="2026-02-21 08:18:07.223185158 +0000 UTC m=+5462.256269356" lastFinishedPulling="2026-02-21 08:18:12.589260597 +0000 UTC m=+5467.622344795" observedRunningTime="2026-02-21 08:18:13.660912622 +0000 UTC m=+5468.693996820" watchObservedRunningTime="2026-02-21 08:18:13.666270227 +0000 UTC m=+5468.699354425" Feb 21 08:18:14 crc kubenswrapper[4820]: I0221 08:18:14.656541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerStarted","Data":"41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab"} Feb 21 08:18:14 crc kubenswrapper[4820]: I0221 08:18:14.690442 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qwrz" podStartSLOduration=2.721696783 podStartE2EDuration="8.690424757s" podCreationTimestamp="2026-02-21 08:18:06 +0000 UTC" firstStartedPulling="2026-02-21 08:18:08.347368585 +0000 UTC m=+5463.380452773" lastFinishedPulling="2026-02-21 08:18:14.316096549 +0000 UTC m=+5469.349180747" observedRunningTime="2026-02-21 08:18:14.683909661 +0000 UTC m=+5469.716993869" watchObservedRunningTime="2026-02-21 08:18:14.690424757 +0000 UTC m=+5469.723508955" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.340924 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.341269 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.387351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:16 crc kubenswrapper[4820]: I0221 08:18:16.910357 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Feb 21 08:18:17 crc kubenswrapper[4820]: I0221 08:18:17.252897 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:17 crc kubenswrapper[4820]: I0221 08:18:17.252970 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:18 crc kubenswrapper[4820]: I0221 08:18:18.302969 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4qwrz" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" probeResult="failure" output=< Feb 21 08:18:18 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:18:18 crc kubenswrapper[4820]: > Feb 21 08:18:19 crc kubenswrapper[4820]: I0221 08:18:19.698796 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:19 crc kubenswrapper[4820]: E0221 08:18:19.699338 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.470889 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:18:20 crc kubenswrapper[4820]: E0221 08:18:20.471418 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-utilities" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471442 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-utilities" Feb 21 08:18:20 crc kubenswrapper[4820]: E0221 08:18:20.471456 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471462 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" Feb 21 08:18:20 crc kubenswrapper[4820]: E0221 08:18:20.471487 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-content" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471495 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="extract-content" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.471661 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c352da-ee5d-4dc2-b5b0-ba5e0e29e272" containerName="registry-server" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.472228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.490049 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.578136 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.583824 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.597025 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.601807 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.601864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.630165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" 
Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.703752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.704401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.727292 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"glance-db-create-mkg7q\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.800379 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.807629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.808390 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.808980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.828599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"glance-9c0c-account-create-update-bf5w2\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:20 crc kubenswrapper[4820]: I0221 08:18:20.918105 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.289629 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:18:21 crc kubenswrapper[4820]: W0221 08:18:21.293530 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd19c6e2c_81cf_472e_babb_fb9cf7bf052b.slice/crio-99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f WatchSource:0}: Error finding container 99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f: Status 404 returned error can't find the container with id 99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f Feb 21 08:18:21 crc kubenswrapper[4820]: W0221 08:18:21.446460 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3214fb7b_d651_4bd3_a75b_a9995693fc60.slice/crio-e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1 WatchSource:0}: Error finding container e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1: Status 404 returned error can't find the container with id e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1 Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.450257 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.733859 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerStarted","Data":"3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e"} Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.733912 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" 
event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerStarted","Data":"99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f"} Feb 21 08:18:21 crc kubenswrapper[4820]: I0221 08:18:21.735998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c0c-account-create-update-bf5w2" event={"ID":"3214fb7b-d651-4bd3-a75b-a9995693fc60","Type":"ContainerStarted","Data":"e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1"} Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.745087 4820 generic.go:334] "Generic (PLEG): container finished" podID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerID="3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e" exitCode=0 Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.745401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerDied","Data":"3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e"} Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.747403 4820 generic.go:334] "Generic (PLEG): container finished" podID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerID="d615c6eccf115a17b159e3c5aa929268d96702d9d0293e623715649c3ad02f08" exitCode=0 Feb 21 08:18:22 crc kubenswrapper[4820]: I0221 08:18:22.747454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c0c-account-create-update-bf5w2" event={"ID":"3214fb7b-d651-4bd3-a75b-a9995693fc60","Type":"ContainerDied","Data":"d615c6eccf115a17b159e3c5aa929268d96702d9d0293e623715649c3ad02f08"} Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.108452 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.122132 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") pod \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291196 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") pod \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\" (UID: \"d19c6e2c-81cf-472e-babb-fb9cf7bf052b\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291252 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") pod \"3214fb7b-d651-4bd3-a75b-a9995693fc60\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291335 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") pod \"3214fb7b-d651-4bd3-a75b-a9995693fc60\" (UID: \"3214fb7b-d651-4bd3-a75b-a9995693fc60\") " Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.291920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d19c6e2c-81cf-472e-babb-fb9cf7bf052b" (UID: "d19c6e2c-81cf-472e-babb-fb9cf7bf052b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.292204 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3214fb7b-d651-4bd3-a75b-a9995693fc60" (UID: "3214fb7b-d651-4bd3-a75b-a9995693fc60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.296331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q" (OuterVolumeSpecName: "kube-api-access-zt22q") pod "3214fb7b-d651-4bd3-a75b-a9995693fc60" (UID: "3214fb7b-d651-4bd3-a75b-a9995693fc60"). InnerVolumeSpecName "kube-api-access-zt22q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.296424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726" (OuterVolumeSpecName: "kube-api-access-f4726") pod "d19c6e2c-81cf-472e-babb-fb9cf7bf052b" (UID: "d19c6e2c-81cf-472e-babb-fb9cf7bf052b"). InnerVolumeSpecName "kube-api-access-f4726". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393309 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4726\" (UniqueName: \"kubernetes.io/projected/d19c6e2c-81cf-472e-babb-fb9cf7bf052b-kube-api-access-f4726\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393364 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt22q\" (UniqueName: \"kubernetes.io/projected/3214fb7b-d651-4bd3-a75b-a9995693fc60-kube-api-access-zt22q\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.393375 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3214fb7b-d651-4bd3-a75b-a9995693fc60-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.767343 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mkg7q" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.767424 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mkg7q" event={"ID":"d19c6e2c-81cf-472e-babb-fb9cf7bf052b","Type":"ContainerDied","Data":"99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f"} Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.767476 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99497510b665d4e7bbb309bd521bcce1a7f023e19088b5978dfc65a0d1e7a19f" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.769399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c0c-account-create-update-bf5w2" event={"ID":"3214fb7b-d651-4bd3-a75b-a9995693fc60","Type":"ContainerDied","Data":"e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1"} Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.769445 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e19c159d299106bda57dfed77736ce302d0fdce7705a5ee5fa5647398045d1" Feb 21 08:18:24 crc kubenswrapper[4820]: I0221 08:18:24.769474 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c0c-account-create-update-bf5w2" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.753526 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:18:25 crc kubenswrapper[4820]: E0221 08:18:25.754291 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerName="mariadb-account-create-update" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754309 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerName="mariadb-account-create-update" Feb 21 08:18:25 crc kubenswrapper[4820]: E0221 08:18:25.754339 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerName="mariadb-database-create" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754347 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerName="mariadb-database-create" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754559 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" containerName="mariadb-account-create-update" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.754582 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" containerName="mariadb-database-create" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.755297 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.757111 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrcwm" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.757384 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.765364 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:25 crc kubenswrapper[4820]: I0221 08:18:25.920719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlq7v\" (UniqueName: 
\"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022116 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.022163 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.026709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"glance-db-sync-l4nch\" 
(UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.041828 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.043424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.043678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"glance-db-sync-l4nch\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.074155 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.387375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.432077 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.630364 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:18:26 crc kubenswrapper[4820]: W0221 08:18:26.634730 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9668bc3_af3a_43af_8ead_9cc596776786.slice/crio-2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7 WatchSource:0}: Error finding container 2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7: Status 404 returned error can't find the container with id 2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7 Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.784209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerStarted","Data":"2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7"} Feb 21 08:18:26 crc kubenswrapper[4820]: I0221 08:18:26.784388 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v9fdw" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="registry-server" containerID="cri-o://10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" gracePeriod=2 Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.098344 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25afb423_bb97_4560_9e0a_369f39227c3f.slice/crio-conmon-10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b.scope\": RecentStats: unable to find data in memory cache]" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.792788 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798144 4820 generic.go:334] "Generic (PLEG): container finished" podID="25afb423-bb97-4560-9e0a-369f39227c3f" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" exitCode=0 Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798181 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9fdw" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b"} Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798274 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9fdw" event={"ID":"25afb423-bb97-4560-9e0a-369f39227c3f","Type":"ContainerDied","Data":"7158af20255b2b51084f9a424f285043fe5b8746dff859ce4bb8cdb647a3f5a7"} Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.798308 4820 scope.go:117] "RemoveContainer" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.833944 4820 scope.go:117] "RemoveContainer" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.900649 4820 scope.go:117] "RemoveContainer" 
containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.941516 4820 scope.go:117] "RemoveContainer" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.942405 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b\": container with ID starting with 10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b not found: ID does not exist" containerID="10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.942447 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b"} err="failed to get container status \"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b\": rpc error: code = NotFound desc = could not find container \"10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b\": container with ID starting with 10be6b023742db67679394d7b7608cd512ccb8c003be3e5bd4786bc52cf3ce9b not found: ID does not exist" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.942470 4820 scope.go:117] "RemoveContainer" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.942719 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577\": container with ID starting with 53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577 not found: ID does not exist" containerID="53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577" Feb 21 08:18:27 crc 
kubenswrapper[4820]: I0221 08:18:27.942752 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577"} err="failed to get container status \"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577\": rpc error: code = NotFound desc = could not find container \"53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577\": container with ID starting with 53173d23164d8a991ed01d2480c6b1ab04013eec4f45c01344890736a2eb8577 not found: ID does not exist" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.942773 4820 scope.go:117] "RemoveContainer" containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" Feb 21 08:18:27 crc kubenswrapper[4820]: E0221 08:18:27.943316 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426\": container with ID starting with aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426 not found: ID does not exist" containerID="aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.943359 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426"} err="failed to get container status \"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426\": rpc error: code = NotFound desc = could not find container \"aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426\": container with ID starting with aed66bef0f5872110c89072efcdab3b8b6e8b0cb4edbb6a7e467c2e9b760f426 not found: ID does not exist" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.978345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") pod \"25afb423-bb97-4560-9e0a-369f39227c3f\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.978415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") pod \"25afb423-bb97-4560-9e0a-369f39227c3f\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.978710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") pod \"25afb423-bb97-4560-9e0a-369f39227c3f\" (UID: \"25afb423-bb97-4560-9e0a-369f39227c3f\") " Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.979443 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities" (OuterVolumeSpecName: "utilities") pod "25afb423-bb97-4560-9e0a-369f39227c3f" (UID: "25afb423-bb97-4560-9e0a-369f39227c3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:27 crc kubenswrapper[4820]: I0221 08:18:27.990652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2" (OuterVolumeSpecName: "kube-api-access-8c7g2") pod "25afb423-bb97-4560-9e0a-369f39227c3f" (UID: "25afb423-bb97-4560-9e0a-369f39227c3f"). InnerVolumeSpecName "kube-api-access-8c7g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.050258 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25afb423-bb97-4560-9e0a-369f39227c3f" (UID: "25afb423-bb97-4560-9e0a-369f39227c3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.081262 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.081308 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c7g2\" (UniqueName: \"kubernetes.io/projected/25afb423-bb97-4560-9e0a-369f39227c3f-kube-api-access-8c7g2\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.081323 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25afb423-bb97-4560-9e0a-369f39227c3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.139292 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.148286 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v9fdw"] Feb 21 08:18:28 crc kubenswrapper[4820]: I0221 08:18:28.304905 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4qwrz" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" probeResult="failure" output=< Feb 21 08:18:28 crc kubenswrapper[4820]: timeout: failed to 
connect service ":50051" within 1s Feb 21 08:18:28 crc kubenswrapper[4820]: > Feb 21 08:18:29 crc kubenswrapper[4820]: I0221 08:18:29.742519 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" path="/var/lib/kubelet/pods/25afb423-bb97-4560-9e0a-369f39227c3f/volumes" Feb 21 08:18:33 crc kubenswrapper[4820]: I0221 08:18:33.696801 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:33 crc kubenswrapper[4820]: E0221 08:18:33.697154 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:37 crc kubenswrapper[4820]: I0221 08:18:37.299022 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:37 crc kubenswrapper[4820]: I0221 08:18:37.359908 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:38 crc kubenswrapper[4820]: I0221 08:18:38.113146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:38 crc kubenswrapper[4820]: I0221 08:18:38.897217 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qwrz" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" containerID="cri-o://41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab" gracePeriod=2 Feb 21 08:18:39 crc kubenswrapper[4820]: I0221 08:18:39.908679 4820 
generic.go:334] "Generic (PLEG): container finished" podID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerID="41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab" exitCode=0 Feb 21 08:18:39 crc kubenswrapper[4820]: I0221 08:18:39.908721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab"} Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.562541 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.674281 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") pod \"a4169ca0-c75e-496a-9d08-a1fe753df974\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.674432 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") pod \"a4169ca0-c75e-496a-9d08-a1fe753df974\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.674596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") pod \"a4169ca0-c75e-496a-9d08-a1fe753df974\" (UID: \"a4169ca0-c75e-496a-9d08-a1fe753df974\") " Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.675401 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities" 
(OuterVolumeSpecName: "utilities") pod "a4169ca0-c75e-496a-9d08-a1fe753df974" (UID: "a4169ca0-c75e-496a-9d08-a1fe753df974"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.679547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc" (OuterVolumeSpecName: "kube-api-access-pvvfc") pod "a4169ca0-c75e-496a-9d08-a1fe753df974" (UID: "a4169ca0-c75e-496a-9d08-a1fe753df974"). InnerVolumeSpecName "kube-api-access-pvvfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.687262 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4169ca0-c75e-496a-9d08-a1fe753df974" (UID: "a4169ca0-c75e-496a-9d08-a1fe753df974"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.776838 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.776873 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvvfc\" (UniqueName: \"kubernetes.io/projected/a4169ca0-c75e-496a-9d08-a1fe753df974-kube-api-access-pvvfc\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.776888 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4169ca0-c75e-496a-9d08-a1fe753df974-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.947295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qwrz" event={"ID":"a4169ca0-c75e-496a-9d08-a1fe753df974","Type":"ContainerDied","Data":"ba573cf9178ff188dac0401b89f49fc044a73915f950055de346dd0e475d338c"} Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.947942 4820 scope.go:117] "RemoveContainer" containerID="41f7bbabb1f4aac78768cdd9f7b87492a53e7441d938d33beceedfbebbdd13ab" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.947334 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qwrz" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.950719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerStarted","Data":"f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711"} Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.968364 4820 scope.go:117] "RemoveContainer" containerID="229284d09cd2a69b1c9acdbdb5c342d63af66ac1afb08bc6312d62ae998ec868" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.974649 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-l4nch" podStartSLOduration=2.241599041 podStartE2EDuration="18.974631152s" podCreationTimestamp="2026-02-21 08:18:25 +0000 UTC" firstStartedPulling="2026-02-21 08:18:26.638849593 +0000 UTC m=+5481.671933791" lastFinishedPulling="2026-02-21 08:18:43.371881684 +0000 UTC m=+5498.404965902" observedRunningTime="2026-02-21 08:18:43.969807982 +0000 UTC m=+5499.002892180" watchObservedRunningTime="2026-02-21 08:18:43.974631152 +0000 UTC m=+5499.007715350" Feb 21 08:18:43 crc kubenswrapper[4820]: I0221 08:18:43.995153 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:44 crc kubenswrapper[4820]: I0221 08:18:44.004388 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qwrz"] Feb 21 08:18:44 crc kubenswrapper[4820]: I0221 08:18:44.005897 4820 scope.go:117] "RemoveContainer" containerID="869e57f4b3b81bc2e213a6194eee05e7623a2b65d138f30982cedbf663949894" Feb 21 08:18:45 crc kubenswrapper[4820]: I0221 08:18:45.707556 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" path="/var/lib/kubelet/pods/a4169ca0-c75e-496a-9d08-a1fe753df974/volumes" Feb 21 08:18:47 crc kubenswrapper[4820]: 
E0221 08:18:47.545578 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9668bc3_af3a_43af_8ead_9cc596776786.slice/crio-conmon-f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9668bc3_af3a_43af_8ead_9cc596776786.slice/crio-f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711.scope\": RecentStats: unable to find data in memory cache]" Feb 21 08:18:47 crc kubenswrapper[4820]: I0221 08:18:47.987444 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9668bc3-af3a-43af-8ead-9cc596776786" containerID="f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711" exitCode=0 Feb 21 08:18:47 crc kubenswrapper[4820]: I0221 08:18:47.987503 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerDied","Data":"f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711"} Feb 21 08:18:48 crc kubenswrapper[4820]: I0221 08:18:48.697399 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:18:48 crc kubenswrapper[4820]: E0221 08:18:48.697934 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.396574 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.491901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.492011 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.492057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.492126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") pod \"f9668bc3-af3a-43af-8ead-9cc596776786\" (UID: \"f9668bc3-af3a-43af-8ead-9cc596776786\") " Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.497625 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v" (OuterVolumeSpecName: "kube-api-access-dlq7v") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "kube-api-access-dlq7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.498351 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.523699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.540726 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data" (OuterVolumeSpecName: "config-data") pod "f9668bc3-af3a-43af-8ead-9cc596776786" (UID: "f9668bc3-af3a-43af-8ead-9cc596776786"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594277 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlq7v\" (UniqueName: \"kubernetes.io/projected/f9668bc3-af3a-43af-8ead-9cc596776786-kube-api-access-dlq7v\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594321 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594335 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:49.594346 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9668bc3-af3a-43af-8ead-9cc596776786-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.004520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l4nch" event={"ID":"f9668bc3-af3a-43af-8ead-9cc596776786","Type":"ContainerDied","Data":"2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7"} Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.004563 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5505f5eee4bd8eb9dc766daa0ad19069fe0297aec6728a729d6c5540679ff7" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.004619 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l4nch" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.491774 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492227 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492261 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492276 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492298 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492317 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492324 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="extract-utilities" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492341 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" 
containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492347 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492359 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" containerName="glance-db-sync" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492364 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" containerName="glance-db-sync" Feb 21 08:18:50 crc kubenswrapper[4820]: E0221 08:18:50.492381 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492387 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="extract-content" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492572 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4169ca0-c75e-496a-9d08-a1fe753df974" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" containerName="glance-db-sync" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.492604 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="25afb423-bb97-4560-9e0a-369f39227c3f" containerName="registry-server" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.493552 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.526708 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555790 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.555819 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.586233 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.592834 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.595466 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrcwm" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.595882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.604521 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.617374 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " 
pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.659615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.662875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.663205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.663835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.666332 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.702230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"dnsmasq-dns-88785db75-n675s\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.719404 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.721332 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.723541 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.743752 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764849 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764925 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.764965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.765313 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.837685 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.867624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868512 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.868788 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.871807 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.872330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.873862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.875037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.877565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.877802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.884678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.885276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.889786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.892323 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.893734 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"glance-default-internal-api-0\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.893911 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:18:50 crc kubenswrapper[4820]: I0221 08:18:50.933102 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.077372 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.377335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"]
Feb 21 08:18:51 crc kubenswrapper[4820]: W0221 08:18:51.647455 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f08a0d4_93f9_4236_99df_d1b3f77f9efa.slice/crio-9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e WatchSource:0}: Error finding container 9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e: Status 404 returned error can't find the container with id 9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e
Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.647606 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:18:51 crc kubenswrapper[4820]: I0221 08:18:51.800544 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:18:51 crc kubenswrapper[4820]: W0221 08:18:51.808914 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60859ade_51ea_4da8_84ac_55d16f7b01b8.slice/crio-529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68 WatchSource:0}: Error finding container 529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68: Status 404 returned error can't find the container with id 529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68
Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.053965 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" exitCode=0
Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.054026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerDied","Data":"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7"}
Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.054053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerStarted","Data":"f827b80cd53d1809ff5c55e3c26ee1b57450c8044c04a10d2fb708ccf54ddf5e"}
Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.057712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerStarted","Data":"9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e"}
Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.059650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerStarted","Data":"529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68"}
Feb 21 08:18:52 crc kubenswrapper[4820]: I0221 08:18:52.171908 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.048669 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerStarted","Data":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"}
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerStarted","Data":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"}
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121638 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" containerID="cri-o://e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" gracePeriod=30
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.121781 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd" containerID="cri-o://cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" gracePeriod=30
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.134877 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerStarted","Data":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"}
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.143362 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerStarted","Data":"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e"}
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.144365 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.177282 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.177263486 podStartE2EDuration="3.177263486s" podCreationTimestamp="2026-02-21 08:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:53.172659061 +0000 UTC m=+5508.205743269" watchObservedRunningTime="2026-02-21 08:18:53.177263486 +0000 UTC m=+5508.210347684"
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.262454 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-88785db75-n675s" podStartSLOduration=3.2624326200000002 podStartE2EDuration="3.26243262s" podCreationTimestamp="2026-02-21 08:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:53.219744825 +0000 UTC m=+5508.252829023" watchObservedRunningTime="2026-02-21 08:18:53.26243262 +0000 UTC m=+5508.295516818"
Feb 21 08:18:53 crc kubenswrapper[4820]: I0221 08:18:53.869579 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039199 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") "
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039719 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") "
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039800 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") "
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039876 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") "
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039916 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") "
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039950 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") pod \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\" (UID: \"4f08a0d4-93f9-4236-99df-d1b3f77f9efa\") "
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.039950 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.040519 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs" (OuterVolumeSpecName: "logs") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.040616 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.047203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts" (OuterVolumeSpecName: "scripts") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.048692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k" (OuterVolumeSpecName: "kube-api-access-fms2k") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "kube-api-access-fms2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.101646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.123855 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data" (OuterVolumeSpecName: "config-data") pod "4f08a0d4-93f9-4236-99df-d1b3f77f9efa" (UID: "4f08a0d4-93f9-4236-99df-d1b3f77f9efa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142041 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-logs\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142074 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142085 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fms2k\" (UniqueName: \"kubernetes.io/projected/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-kube-api-access-fms2k\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142095 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.142102 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f08a0d4-93f9-4236-99df-d1b3f77f9efa-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154404 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892" exitCode=143
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154447 4820 generic.go:334] "Generic (PLEG): container finished" podID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9" exitCode=143
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154500 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.154510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerDied","Data":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"}
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.155107 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerDied","Data":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"}
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.155131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f08a0d4-93f9-4236-99df-d1b3f77f9efa","Type":"ContainerDied","Data":"9d97f6215dde897fd2aba8803c4f061570830c5ba6c118d63be295bca11e660e"}
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.155205 4820 scope.go:117] "RemoveContainer" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.159641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerStarted","Data":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"}
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.159696 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" containerID="cri-o://71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" gracePeriod=30
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.159787 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-httpd" containerID="cri-o://3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" gracePeriod=30
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.187782 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.187764057 podStartE2EDuration="4.187764057s" podCreationTimestamp="2026-02-21 08:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:54.187703675 +0000 UTC m=+5509.220787883" watchObservedRunningTime="2026-02-21 08:18:54.187764057 +0000 UTC m=+5509.220848255"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.232000 4820 scope.go:117] "RemoveContainer" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.252988 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.254736 4820 scope.go:117] "RemoveContainer" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"
Feb 21 08:18:54 crc kubenswrapper[4820]: E0221 08:18:54.255180 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": container with ID starting with cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892 not found: ID does not exist" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255209 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"} err="failed to get container status \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": rpc error: code = NotFound desc = could not find container \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": container with ID starting with cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892 not found: ID does not exist"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255229 4820 scope.go:117] "RemoveContainer" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"
Feb 21 08:18:54 crc kubenswrapper[4820]: E0221 08:18:54.255489 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": container with ID starting with e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9 not found: ID does not exist" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255513 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"} err="failed to get container status \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": rpc error: code = NotFound desc = could not find container \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": container with ID starting with e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9 not found: ID does not exist"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255525 4820 scope.go:117] "RemoveContainer" containerID="cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255801 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892"} err="failed to get container status \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": rpc error: code = NotFound desc = could not find container \"cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892\": container with ID starting with cb9819a6a2af763d98923b1cf74019e736d2f201d32bc8dd20f5b04d6fe9a892 not found: ID does not exist"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.255824 4820 scope.go:117] "RemoveContainer" containerID="e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.256161 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9"} err="failed to get container status \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": rpc error: code = NotFound desc = could not find container \"e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9\": container with ID starting with e3d01748a0e5907b09b162654b36fced3d513cff1e09cbcec22d9bde7d8c04d9 not found: ID does not exist"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.273343 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.281930 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:18:54 crc kubenswrapper[4820]: E0221 08:18:54.282376 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd"
Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282392 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd"
Feb 21 08:18:54 crc
kubenswrapper[4820]: E0221 08:18:54.282421 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282429 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282682 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-log" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.282722 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" containerName="glance-httpd" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.283862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.287050 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.287284 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.301603 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.446991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447356 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447442 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.447546 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549685 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.549757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.550793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.551000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.555137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.555762 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.557321 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.557754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.568383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"glance-default-external-api-0\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.614782 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.770370 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.957775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958423 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958641 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: 
\"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") pod \"60859ade-51ea-4da8-84ac-55d16f7b01b8\" (UID: \"60859ade-51ea-4da8-84ac-55d16f7b01b8\") " Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs" (OuterVolumeSpecName: "logs") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.958872 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.959250 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.959269 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60859ade-51ea-4da8-84ac-55d16f7b01b8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.965749 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65" (OuterVolumeSpecName: "kube-api-access-wlc65") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "kube-api-access-wlc65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.971421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts" (OuterVolumeSpecName: "scripts") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:54 crc kubenswrapper[4820]: I0221 08:18:54.991406 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.006572 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data" (OuterVolumeSpecName: "config-data") pod "60859ade-51ea-4da8-84ac-55d16f7b01b8" (UID: "60859ade-51ea-4da8-84ac-55d16f7b01b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061387 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061421 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061433 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlc65\" (UniqueName: \"kubernetes.io/projected/60859ade-51ea-4da8-84ac-55d16f7b01b8-kube-api-access-wlc65\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.061444 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60859ade-51ea-4da8-84ac-55d16f7b01b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172219 4820 generic.go:334] "Generic (PLEG): container finished" podID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" exitCode=0 Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172272 4820 generic.go:334] "Generic (PLEG): container finished" podID="60859ade-51ea-4da8-84ac-55d16f7b01b8" 
containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" exitCode=143 Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172270 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerDied","Data":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerDied","Data":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60859ade-51ea-4da8-84ac-55d16f7b01b8","Type":"ContainerDied","Data":"529d4cdc81ef9e84c4a4652403c54470704bfb63f456d0296db5ab4f2fd2cd68"} Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172331 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.172342 4820 scope.go:117] "RemoveContainer" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.184095 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: W0221 08:18:55.185692 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b012ae7_d786_413d_82ca_88448b64b4cd.slice/crio-4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9 WatchSource:0}: Error finding container 4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9: Status 404 returned error can't find the container with id 4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9 Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.243972 4820 scope.go:117] "RemoveContainer" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.256725 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.267175 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.282482 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-httpd" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282500 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" 
containerName="glance-httpd" Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.282516 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282523 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282678 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-log" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.282691 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" containerName="glance-httpd" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.283588 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.285960 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.286157 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.293869 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.309309 4820 scope.go:117] "RemoveContainer" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.311291 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": container with ID starting with 
3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b not found: ID does not exist" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.311475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} err="failed to get container status \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": rpc error: code = NotFound desc = could not find container \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": container with ID starting with 3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b not found: ID does not exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.311622 4820 scope.go:117] "RemoveContainer" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: E0221 08:18:55.311975 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": container with ID starting with 71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7 not found: ID does not exist" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.312063 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} err="failed to get container status \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": rpc error: code = NotFound desc = could not find container \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": container with ID starting with 71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7 not found: ID does not 
exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.312142 4820 scope.go:117] "RemoveContainer" containerID="3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.312495 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b"} err="failed to get container status \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": rpc error: code = NotFound desc = could not find container \"3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b\": container with ID starting with 3c83b40ffa41649566913128e4b7908aa7dc8efb96c27ff21d65487d8f99584b not found: ID does not exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.313459 4820 scope.go:117] "RemoveContainer" containerID="71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.325630 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7"} err="failed to get container status \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": rpc error: code = NotFound desc = could not find container \"71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7\": container with ID starting with 71f235c98526dd9344f62d28f6c2872619965cfd6afdee46e68c46bf3e7954a7 not found: ID does not exist" Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.469931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0" Feb 21 08:18:55 crc 
kubenswrapper[4820]: I0221 08:18:55.470001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470069 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470165 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470227 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.470352 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.574756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.575556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.575652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.575722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.576782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.578760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.579443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.579728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.587350 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.589350 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.598978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"glance-default-internal-api-0\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.615077 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.718572 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f08a0d4-93f9-4236-99df-d1b3f77f9efa" path="/var/lib/kubelet/pods/4f08a0d4-93f9-4236-99df-d1b3f77f9efa/volumes"
Feb 21 08:18:55 crc kubenswrapper[4820]: I0221 08:18:55.719964 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60859ade-51ea-4da8-84ac-55d16f7b01b8" path="/var/lib/kubelet/pods/60859ade-51ea-4da8-84ac-55d16f7b01b8/volumes"
Feb 21 08:18:56 crc kubenswrapper[4820]: I0221 08:18:56.183008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerStarted","Data":"fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f"}
Feb 21 08:18:56 crc kubenswrapper[4820]: I0221 08:18:56.183071 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerStarted","Data":"4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9"}
Feb 21 08:18:56 crc kubenswrapper[4820]: I0221 08:18:56.224193 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:18:57 crc kubenswrapper[4820]: I0221 08:18:57.194399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerStarted","Data":"6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1"}
Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.204227 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerStarted","Data":"9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4"}
Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.204695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerStarted","Data":"3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9"}
Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.205930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerStarted","Data":"8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64"}
Feb 21 08:18:58 crc kubenswrapper[4820]: I0221 08:18:58.231537 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.231519357 podStartE2EDuration="4.231519357s" podCreationTimestamp="2026-02-21 08:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:58.226944024 +0000 UTC m=+5513.260028212" watchObservedRunningTime="2026-02-21 08:18:58.231519357 +0000 UTC m=+5513.264603555"
Feb 21 08:18:59 crc kubenswrapper[4820]: I0221 08:18:59.239190 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.23916643 podStartE2EDuration="4.23916643s" podCreationTimestamp="2026-02-21 08:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:18:59.232333146 +0000 UTC m=+5514.265417374" watchObservedRunningTime="2026-02-21 08:18:59.23916643 +0000 UTC m=+5514.272250628"
Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.696906 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:19:00 crc kubenswrapper[4820]: E0221 08:19:00.697488 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.840483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-88785db75-n675s"
Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.902369 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"]
Feb 21 08:19:00 crc kubenswrapper[4820]: I0221 08:19:00.902927 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98b448c79-xx42c" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns" containerID="cri-o://6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" gracePeriod=10
Feb 21 08:19:01 crc kubenswrapper[4820]: I0221 08:19:01.971627 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004524 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") "
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004612 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") "
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004646 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") "
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004748 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") "
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.004823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") pod \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\" (UID: \"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d\") "
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.018489 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl" (OuterVolumeSpecName: "kube-api-access-2shsl") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "kube-api-access-2shsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.052129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.053253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config" (OuterVolumeSpecName: "config") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.054080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.064549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" (UID: "fbe871e8-aee2-4ae2-ab24-2fd1c146d92d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106450 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106623 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2shsl\" (UniqueName: \"kubernetes.io/projected/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-kube-api-access-2shsl\") on node \"crc\" DevicePath \"\""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106689 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-config\") on node \"crc\" DevicePath \"\""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106761 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.106812 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240211 4820 generic.go:334] "Generic (PLEG): container finished" podID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30" exitCode=0
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerDied","Data":"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"}
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240417 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98b448c79-xx42c"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240582 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98b448c79-xx42c" event={"ID":"fbe871e8-aee2-4ae2-ab24-2fd1c146d92d","Type":"ContainerDied","Data":"147651eae6f2d4d4506345601d2cf298cfe763874e04c3aa44b45feb488eb2f6"}
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.240686 4820 scope.go:117] "RemoveContainer" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.266641 4820 scope.go:117] "RemoveContainer" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.275178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"]
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.282936 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98b448c79-xx42c"]
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300101 4820 scope.go:117] "RemoveContainer" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"
Feb 21 08:19:02 crc kubenswrapper[4820]: E0221 08:19:02.300541 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30\": container with ID starting with 6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30 not found: ID does not exist" containerID="6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300575 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30"} err="failed to get container status \"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30\": rpc error: code = NotFound desc = could not find container \"6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30\": container with ID starting with 6a365a89bd6f9d3daff119be8f91a7435eed1407da151a034650f4ffcbc89a30 not found: ID does not exist"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300600 4820 scope.go:117] "RemoveContainer" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"
Feb 21 08:19:02 crc kubenswrapper[4820]: E0221 08:19:02.300943 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b\": container with ID starting with 00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b not found: ID does not exist" containerID="00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"
Feb 21 08:19:02 crc kubenswrapper[4820]: I0221 08:19:02.300963 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b"} err="failed to get container status \"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b\": rpc error: code = NotFound desc = could not find container \"00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b\": container with ID starting with 00cce19a089c535b378683eccf6a3fafaa907ad76d634ac3a113a5fc33bd154b not found: ID does not exist"
Feb 21 08:19:03 crc kubenswrapper[4820]: I0221 08:19:03.706500 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" path="/var/lib/kubelet/pods/fbe871e8-aee2-4ae2-ab24-2fd1c146d92d/volumes"
Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.615887 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.615967 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.667407 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:04 crc kubenswrapper[4820]: I0221 08:19:04.671916 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.276381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.276697 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.615479 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.615527 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.655986 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:05 crc kubenswrapper[4820]: I0221 08:19:05.660368 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:06 crc kubenswrapper[4820]: I0221 08:19:06.288642 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:06 crc kubenswrapper[4820]: I0221 08:19:06.289840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:07 crc kubenswrapper[4820]: I0221 08:19:07.399958 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:07 crc kubenswrapper[4820]: I0221 08:19:07.400054 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 08:19:07 crc kubenswrapper[4820]: I0221 08:19:07.400592 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.301414 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.301726 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.502503 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:08 crc kubenswrapper[4820]: I0221 08:19:08.642564 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 21 08:19:12 crc kubenswrapper[4820]: I0221 08:19:12.698511 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:19:12 crc kubenswrapper[4820]: E0221 08:19:12.699268 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.631418 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8fv99"]
Feb 21 08:19:16 crc kubenswrapper[4820]: E0221 08:19:16.632524 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="init"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.632544 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="init"
Feb 21 08:19:16 crc kubenswrapper[4820]: E0221 08:19:16.632567 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.632575 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.632771 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe871e8-aee2-4ae2-ab24-2fd1c146d92d" containerName="dnsmasq-dns"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.633548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.639097 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"]
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.640304 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.653992 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.654001 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8fv99"]
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.663282 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"]
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716810 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.716845 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.817908 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.817960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818043 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818093 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818826 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.818868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.838386 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"placement-9480-account-create-update-bpvlj\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.844994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"placement-db-create-8fv99\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") " pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.957002 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:16 crc kubenswrapper[4820]: I0221 08:19:16.964144 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj"
Feb 21 08:19:17 crc kubenswrapper[4820]: I0221 08:19:17.444684 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8fv99"]
Feb 21 08:19:17 crc kubenswrapper[4820]: W0221 08:19:17.447567 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod549ebe18_2d08_41b5_ac23_2321a43dfe38.slice/crio-c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7 WatchSource:0}: Error finding container c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7: Status 404 returned error can't find the container with id c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7
Feb 21 08:19:17 crc kubenswrapper[4820]: I0221 08:19:17.518474 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"]
Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.395691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerStarted","Data":"d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23"}
Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.395950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerStarted","Data":"3d3f93ecf4b74cdb08de62607c46381614bc6e69c6bc0134f7564be1fa5177e3"}
Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.397125 4820 generic.go:334] "Generic (PLEG): container finished" podID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerID="0c4429cc6df30d2e093692bf4cbd7627086a28c710ac6ad90f897b0cf49fd1d6" exitCode=0
Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.397174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8fv99" event={"ID":"549ebe18-2d08-41b5-ac23-2321a43dfe38","Type":"ContainerDied","Data":"0c4429cc6df30d2e093692bf4cbd7627086a28c710ac6ad90f897b0cf49fd1d6"}
Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.397201 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8fv99" event={"ID":"549ebe18-2d08-41b5-ac23-2321a43dfe38","Type":"ContainerStarted","Data":"c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7"}
Feb 21 08:19:18 crc kubenswrapper[4820]: I0221 08:19:18.419686 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9480-account-create-update-bpvlj" podStartSLOduration=2.4196327220000002 podStartE2EDuration="2.419632722s" podCreationTimestamp="2026-02-21 08:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:19:18.411524263 +0000 UTC m=+5533.444608461" watchObservedRunningTime="2026-02-21 08:19:18.419632722 +0000 UTC m=+5533.452716920"
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.405852 4820 generic.go:334] "Generic (PLEG): container finished" podID="8f96e017-4a70-45ac-9d44-b57829510e53" containerID="d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23" exitCode=0
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.405931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerDied","Data":"d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23"}
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.727295 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99"
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.868810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") pod \"549ebe18-2d08-41b5-ac23-2321a43dfe38\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") "
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.868889 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") pod \"549ebe18-2d08-41b5-ac23-2321a43dfe38\" (UID: \"549ebe18-2d08-41b5-ac23-2321a43dfe38\") "
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.869408 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "549ebe18-2d08-41b5-ac23-2321a43dfe38" (UID: "549ebe18-2d08-41b5-ac23-2321a43dfe38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.871168 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549ebe18-2d08-41b5-ac23-2321a43dfe38-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.880539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq" (OuterVolumeSpecName: "kube-api-access-g4fxq") pod "549ebe18-2d08-41b5-ac23-2321a43dfe38" (UID: "549ebe18-2d08-41b5-ac23-2321a43dfe38"). InnerVolumeSpecName "kube-api-access-g4fxq".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:19 crc kubenswrapper[4820]: I0221 08:19:19.973202 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4fxq\" (UniqueName: \"kubernetes.io/projected/549ebe18-2d08-41b5-ac23-2321a43dfe38-kube-api-access-g4fxq\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.415386 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8fv99" event={"ID":"549ebe18-2d08-41b5-ac23-2321a43dfe38","Type":"ContainerDied","Data":"c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7"} Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.415734 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ed47ae9aea8f2a72bb435e424c469e245c40de656bdf5f5a4ab097ad5a76f7" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.415406 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8fv99" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.759029 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.889548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") pod \"8f96e017-4a70-45ac-9d44-b57829510e53\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.889642 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") pod \"8f96e017-4a70-45ac-9d44-b57829510e53\" (UID: \"8f96e017-4a70-45ac-9d44-b57829510e53\") " Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.890542 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f96e017-4a70-45ac-9d44-b57829510e53" (UID: "8f96e017-4a70-45ac-9d44-b57829510e53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.895891 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4" (OuterVolumeSpecName: "kube-api-access-qb2f4") pod "8f96e017-4a70-45ac-9d44-b57829510e53" (UID: "8f96e017-4a70-45ac-9d44-b57829510e53"). InnerVolumeSpecName "kube-api-access-qb2f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.992555 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f96e017-4a70-45ac-9d44-b57829510e53-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:20 crc kubenswrapper[4820]: I0221 08:19:20.992608 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb2f4\" (UniqueName: \"kubernetes.io/projected/8f96e017-4a70-45ac-9d44-b57829510e53-kube-api-access-qb2f4\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:21 crc kubenswrapper[4820]: I0221 08:19:21.424376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9480-account-create-update-bpvlj" event={"ID":"8f96e017-4a70-45ac-9d44-b57829510e53","Type":"ContainerDied","Data":"3d3f93ecf4b74cdb08de62607c46381614bc6e69c6bc0134f7564be1fa5177e3"} Feb 21 08:19:21 crc kubenswrapper[4820]: I0221 08:19:21.424416 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3f93ecf4b74cdb08de62607c46381614bc6e69c6bc0134f7564be1fa5177e3" Feb 21 08:19:21 crc kubenswrapper[4820]: I0221 08:19:21.424452 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9480-account-create-update-bpvlj" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.005975 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:19:22 crc kubenswrapper[4820]: E0221 08:19:22.006516 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" containerName="mariadb-account-create-update" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006534 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" containerName="mariadb-account-create-update" Feb 21 08:19:22 crc kubenswrapper[4820]: E0221 08:19:22.006587 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerName="mariadb-database-create" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006594 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerName="mariadb-database-create" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006835 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" containerName="mariadb-account-create-update" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.006858 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" containerName="mariadb-database-create" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.007727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.010626 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.010891 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6lbmf" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.017661 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.026997 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.028717 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.040832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.061616 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " 
pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.110906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.111167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213224 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc 
kubenswrapper[4820]: I0221 08:19:22.213329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.213726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 
08:19:22.213837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.214006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.214045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.214076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.217801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.219120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.228252 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.246307 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"placement-db-sync-v696w\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316322 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.316348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " 
pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.317538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.327436 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.338608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"dnsmasq-dns-6bf64f4875-cnv6v\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.363926 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.824330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v696w"] Feb 21 08:19:22 crc kubenswrapper[4820]: I0221 08:19:22.914819 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.453162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerStarted","Data":"5eb479809f1af10797af2f9da4d5f4c6b0d824de6d6f0cac15a90f617c5be024"} Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.455759 4820 generic.go:334] "Generic (PLEG): container finished" podID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerID="700487684b5f87fbcc92aad3f9b93678a16e6a2aeaee18e715699139b2b75390" exitCode=0 Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.455819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerDied","Data":"700487684b5f87fbcc92aad3f9b93678a16e6a2aeaee18e715699139b2b75390"} Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.455855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerStarted","Data":"e9e0ecab29aed0ecb81b655dc50c26ef2c09f8bf912783336d03514cdc73e15c"} Feb 21 08:19:23 crc kubenswrapper[4820]: I0221 08:19:23.697298 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:23 crc kubenswrapper[4820]: E0221 08:19:23.697985 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:24 crc kubenswrapper[4820]: I0221 08:19:24.471890 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerStarted","Data":"208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526"} Feb 21 08:19:24 crc kubenswrapper[4820]: I0221 08:19:24.472283 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:24 crc kubenswrapper[4820]: I0221 08:19:24.501010 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" podStartSLOduration=3.500985904 podStartE2EDuration="3.500985904s" podCreationTimestamp="2026-02-21 08:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:19:24.497962182 +0000 UTC m=+5539.531046380" watchObservedRunningTime="2026-02-21 08:19:24.500985904 +0000 UTC m=+5539.534070102" Feb 21 08:19:27 crc kubenswrapper[4820]: I0221 08:19:27.501965 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerStarted","Data":"5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077"} Feb 21 08:19:27 crc kubenswrapper[4820]: I0221 08:19:27.522049 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v696w" podStartSLOduration=2.9146010479999998 podStartE2EDuration="6.522030293s" podCreationTimestamp="2026-02-21 08:19:21 +0000 UTC" firstStartedPulling="2026-02-21 08:19:22.82805528 +0000 UTC 
m=+5537.861139478" lastFinishedPulling="2026-02-21 08:19:26.435484525 +0000 UTC m=+5541.468568723" observedRunningTime="2026-02-21 08:19:27.515809855 +0000 UTC m=+5542.548894053" watchObservedRunningTime="2026-02-21 08:19:27.522030293 +0000 UTC m=+5542.555114481" Feb 21 08:19:28 crc kubenswrapper[4820]: I0221 08:19:28.517581 4820 generic.go:334] "Generic (PLEG): container finished" podID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerID="5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077" exitCode=0 Feb 21 08:19:28 crc kubenswrapper[4820]: I0221 08:19:28.518010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerDied","Data":"5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077"} Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.841988 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980176 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980264 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: 
\"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.980465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") pod \"8ffe0144-e67b-4ea7-8212-5989f992997e\" (UID: \"8ffe0144-e67b-4ea7-8212-5989f992997e\") " Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.981096 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs" (OuterVolumeSpecName: "logs") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.986060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts" (OuterVolumeSpecName: "scripts") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:19:29 crc kubenswrapper[4820]: I0221 08:19:29.986084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr" (OuterVolumeSpecName: "kube-api-access-l8rlr") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). 
InnerVolumeSpecName "kube-api-access-l8rlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.003986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.005884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data" (OuterVolumeSpecName: "config-data") pod "8ffe0144-e67b-4ea7-8212-5989f992997e" (UID: "8ffe0144-e67b-4ea7-8212-5989f992997e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083273 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083742 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rlr\" (UniqueName: \"kubernetes.io/projected/8ffe0144-e67b-4ea7-8212-5989f992997e-kube-api-access-l8rlr\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083829 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083899 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8ffe0144-e67b-4ea7-8212-5989f992997e-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.083962 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffe0144-e67b-4ea7-8212-5989f992997e-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.538218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v696w" event={"ID":"8ffe0144-e67b-4ea7-8212-5989f992997e","Type":"ContainerDied","Data":"5eb479809f1af10797af2f9da4d5f4c6b0d824de6d6f0cac15a90f617c5be024"} Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.538301 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb479809f1af10797af2f9da4d5f4c6b0d824de6d6f0cac15a90f617c5be024" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.538310 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v696w" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.607481 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64bd48f99b-s6zl2"] Feb 21 08:19:30 crc kubenswrapper[4820]: E0221 08:19:30.608338 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerName="placement-db-sync" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.608453 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerName="placement-db-sync" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.608737 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" containerName="placement-db-sync" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.610078 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612389 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612746 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.612647 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.618212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6lbmf" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.624655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64bd48f99b-s6zl2"] Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.695864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-public-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696195 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-internal-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696331 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-combined-ca-bundle\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696521 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfl2\" (UniqueName: \"kubernetes.io/projected/924c1ab4-a83b-4ab0-9c80-b77489d668f7-kube-api-access-8qfl2\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696612 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-config-data\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696692 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924c1ab4-a83b-4ab0-9c80-b77489d668f7-logs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.696748 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-scripts\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798439 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-public-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798529 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-internal-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-combined-ca-bundle\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qfl2\" (UniqueName: \"kubernetes.io/projected/924c1ab4-a83b-4ab0-9c80-b77489d668f7-kube-api-access-8qfl2\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-config-data\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/924c1ab4-a83b-4ab0-9c80-b77489d668f7-logs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.798756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-scripts\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.801957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924c1ab4-a83b-4ab0-9c80-b77489d668f7-logs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.803415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-scripts\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.804695 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-combined-ca-bundle\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.805851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-public-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: 
\"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.808941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-internal-tls-certs\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.809176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924c1ab4-a83b-4ab0-9c80-b77489d668f7-config-data\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.818006 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qfl2\" (UniqueName: \"kubernetes.io/projected/924c1ab4-a83b-4ab0-9c80-b77489d668f7-kube-api-access-8qfl2\") pod \"placement-64bd48f99b-s6zl2\" (UID: \"924c1ab4-a83b-4ab0-9c80-b77489d668f7\") " pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:30 crc kubenswrapper[4820]: I0221 08:19:30.966181 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:31 crc kubenswrapper[4820]: I0221 08:19:31.407180 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64bd48f99b-s6zl2"] Feb 21 08:19:31 crc kubenswrapper[4820]: I0221 08:19:31.554739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64bd48f99b-s6zl2" event={"ID":"924c1ab4-a83b-4ab0-9c80-b77489d668f7","Type":"ContainerStarted","Data":"ed30b1cf65e3ff9a5ceb4764f72b7377ed1f77feba9f89be05c6adcc62d33326"} Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.365534 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.443716 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.444386 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-88785db75-n675s" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" containerID="cri-o://d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" gracePeriod=10 Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.566145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64bd48f99b-s6zl2" event={"ID":"924c1ab4-a83b-4ab0-9c80-b77489d668f7","Type":"ContainerStarted","Data":"7d43a2e3c545125738fb2eb30d178078f9354a6df411a70662cf7b7924b0c6e4"} Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.566196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64bd48f99b-s6zl2" event={"ID":"924c1ab4-a83b-4ab0-9c80-b77489d668f7","Type":"ContainerStarted","Data":"276ac3b7db1a4fa6785cd1e4803f1234f811d79abe2463e54c16d64c98d38470"} Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.567882 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.567914 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:19:32 crc kubenswrapper[4820]: I0221 08:19:32.592595 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64bd48f99b-s6zl2" podStartSLOduration=2.592569116 podStartE2EDuration="2.592569116s" podCreationTimestamp="2026-02-21 08:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:19:32.584103057 +0000 UTC m=+5547.617187255" watchObservedRunningTime="2026-02-21 08:19:32.592569116 +0000 UTC m=+5547.625653324" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.300259 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346132 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346422 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: 
\"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.346506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") pod \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\" (UID: \"6c743ad7-6ad8-4c83-b5fe-351c550e9495\") " Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.392456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5" (OuterVolumeSpecName: "kube-api-access-fhxr5") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "kube-api-access-fhxr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.452565 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhxr5\" (UniqueName: \"kubernetes.io/projected/6c743ad7-6ad8-4c83-b5fe-351c550e9495-kube-api-access-fhxr5\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.467711 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.526729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.536089 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.537756 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config" (OuterVolumeSpecName: "config") pod "6c743ad7-6ad8-4c83-b5fe-351c550e9495" (UID: "6c743ad7-6ad8-4c83-b5fe-351c550e9495"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555104 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555150 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555168 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.555184 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c743ad7-6ad8-4c83-b5fe-351c550e9495-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.575989 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" exitCode=0 Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576118 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerDied","Data":"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e"} Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576192 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88785db75-n675s" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-88785db75-n675s" event={"ID":"6c743ad7-6ad8-4c83-b5fe-351c550e9495","Type":"ContainerDied","Data":"f827b80cd53d1809ff5c55e3c26ee1b57450c8044c04a10d2fb708ccf54ddf5e"} Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.576228 4820 scope.go:117] "RemoveContainer" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.614637 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.618761 4820 scope.go:117] "RemoveContainer" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.625704 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-88785db75-n675s"] Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.638060 4820 scope.go:117] "RemoveContainer" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" Feb 21 08:19:33 crc kubenswrapper[4820]: E0221 08:19:33.639729 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e\": container with ID starting with d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e not found: ID does not exist" containerID="d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.639774 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e"} err="failed to get container status 
\"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e\": rpc error: code = NotFound desc = could not find container \"d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e\": container with ID starting with d771da8098429bc35acbe1acd028b14110ec458bf3e079d5e864a7b8f3388a1e not found: ID does not exist" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.639802 4820 scope.go:117] "RemoveContainer" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" Feb 21 08:19:33 crc kubenswrapper[4820]: E0221 08:19:33.640258 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7\": container with ID starting with 8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7 not found: ID does not exist" containerID="8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.640292 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7"} err="failed to get container status \"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7\": rpc error: code = NotFound desc = could not find container \"8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7\": container with ID starting with 8583da6af3352cba340e6c570da4326b3e5e000346ee7be21af99fb55ce039b7 not found: ID does not exist" Feb 21 08:19:33 crc kubenswrapper[4820]: I0221 08:19:33.707094 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" path="/var/lib/kubelet/pods/6c743ad7-6ad8-4c83-b5fe-351c550e9495/volumes" Feb 21 08:19:35 crc kubenswrapper[4820]: I0221 08:19:35.703748 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 
08:19:35 crc kubenswrapper[4820]: E0221 08:19:35.704517 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:46 crc kubenswrapper[4820]: I0221 08:19:46.697154 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:46 crc kubenswrapper[4820]: E0221 08:19:46.698088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:19:58 crc kubenswrapper[4820]: I0221 08:19:58.696550 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:19:58 crc kubenswrapper[4820]: E0221 08:19:58.697341 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:20:02 crc kubenswrapper[4820]: I0221 08:20:02.016785 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:20:02 crc kubenswrapper[4820]: I0221 08:20:02.018276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64bd48f99b-s6zl2" Feb 21 08:20:12 crc kubenswrapper[4820]: I0221 08:20:12.697217 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:20:12 crc kubenswrapper[4820]: E0221 08:20:12.697925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:20:23 crc kubenswrapper[4820]: I0221 08:20:23.696790 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a" Feb 21 08:20:23 crc kubenswrapper[4820]: I0221 08:20:23.982481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6"} Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.565976 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:20:25 crc kubenswrapper[4820]: E0221 08:20:25.566889 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="init" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.566904 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="init" Feb 21 08:20:25 crc kubenswrapper[4820]: E0221 
08:20:25.566929 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.566938 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.567136 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c743ad7-6ad8-4c83-b5fe-351c550e9495" containerName="dnsmasq-dns" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.567847 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.587981 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-48s57"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.661529 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cszw4"] Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.663774 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.674008 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"]
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.759818 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.759897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.766702 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"]
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.767894 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.777788 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"]
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.779552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861492 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861724 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.861492 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rllks"]
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.862977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.863419 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.874092 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rllks"]
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.890129 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"nova-api-db-create-48s57\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") " pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963799 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963834 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963885 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963932 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.963965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.964752 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.970667 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"]
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.972393 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.975046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 21 08:20:25 crc kubenswrapper[4820]: I0221 08:20:25.984678 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"]
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:25.997927 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"nova-cell0-db-create-cszw4\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") " pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.065804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.065937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.065994 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.066058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.066976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.067791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.083062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"nova-api-9cdf-account-create-update-r2dfp\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") " pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.086479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"nova-cell1-db-create-rllks\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") " pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.087501 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.168211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.168379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.172159 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"]
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.173741 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.175983 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.183629 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.187406 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.198406 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"]
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.271249 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.271602 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.272950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.282275 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.295212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"nova-cell0-7934-account-create-update-tq229\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") " pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.377758 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.377868 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.481881 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.482312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.483169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.509851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"nova-cell1-9237-account-create-update-4lj2f\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") " pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.556482 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:26 crc kubenswrapper[4820]: W0221 08:20:26.591887 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod245926d7_e415_4af9_b793_9546bb73dc0c.slice/crio-319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3 WatchSource:0}: Error finding container 319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3: Status 404 returned error can't find the container with id 319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.594262 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.597783 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"]
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.760564 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-48s57"]
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.800741 4820 scope.go:117] "RemoveContainer" containerID="8c0fb447700e63fa48262f2548cda06bf12aed24885e176faa0195a336f5334d"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.849729 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rllks"]
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.854991 4820 scope.go:117] "RemoveContainer" containerID="bdd13cb8dd27e6491e6118d0d26b3e20fbbf9ce4646a106c500112e253d46472"
Feb 21 08:20:26 crc kubenswrapper[4820]: I0221 08:20:26.922373 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"]
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.018313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cszw4" event={"ID":"96717fc4-053b-4426-ab50-dc0786c2eb7e","Type":"ContainerStarted","Data":"b28a5d09f7c8c35963057eb1b5755c1348789fd11aa98c71600295fa51311131"}
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.020731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-48s57" event={"ID":"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b","Type":"ContainerStarted","Data":"e3efae41380277c8b69eefd69f6f397f096d20a162b9fb48372fabb1fc853492"}
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.023394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"]
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.024934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerStarted","Data":"596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e"}
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.024980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerStarted","Data":"319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3"}
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.031445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rllks" event={"ID":"e47106ba-9033-418d-a248-6f7ee03d05e6","Type":"ContainerStarted","Data":"1d6bce569e1e07c17cca1b809961f87cd773e10900559b4307547ed148c330ba"}
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.048015 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" podStartSLOduration=2.047991102 podStartE2EDuration="2.047991102s" podCreationTimestamp="2026-02-21 08:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:20:27.043251663 +0000 UTC m=+5602.076335871" watchObservedRunningTime="2026-02-21 08:20:27.047991102 +0000 UTC m=+5602.081075300"
Feb 21 08:20:27 crc kubenswrapper[4820]: I0221 08:20:27.219160 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"]
Feb 21 08:20:27 crc kubenswrapper[4820]: W0221 08:20:27.262310 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10066581_0763_4940_bcba_cdd983819ef7.slice/crio-be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f WatchSource:0}: Error finding container be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f: Status 404 returned error can't find the container with id be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.042843 4820 generic.go:334] "Generic (PLEG): container finished" podID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerID="d2cad300294ab354787d808751187ff2212790e752b7fb9cb18149cc806b0681" exitCode=0
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.043174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rllks" event={"ID":"e47106ba-9033-418d-a248-6f7ee03d05e6","Type":"ContainerDied","Data":"d2cad300294ab354787d808751187ff2212790e752b7fb9cb18149cc806b0681"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.048754 4820 generic.go:334] "Generic (PLEG): container finished" podID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerID="4752965fe12233721da16be2026cb8f90d08c2deaae354b54d275686b6e0952f" exitCode=0
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.048809 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cszw4" event={"ID":"96717fc4-053b-4426-ab50-dc0786c2eb7e","Type":"ContainerDied","Data":"4752965fe12233721da16be2026cb8f90d08c2deaae354b54d275686b6e0952f"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.050727 4820 generic.go:334] "Generic (PLEG): container finished" podID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerID="112dd10479e3747f08f12ee8430488451d124d8475edfb2fee1ed65fd14153d8" exitCode=0
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.050816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-48s57" event={"ID":"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b","Type":"ContainerDied","Data":"112dd10479e3747f08f12ee8430488451d124d8475edfb2fee1ed65fd14153d8"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.052806 4820 generic.go:334] "Generic (PLEG): container finished" podID="245926d7-e415-4af9-b793-9546bb73dc0c" containerID="596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e" exitCode=0
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.052898 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerDied","Data":"596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.054779 4820 generic.go:334] "Generic (PLEG): container finished" podID="10066581-0763-4940-bcba-cdd983819ef7" containerID="7fef589dd234562a1f8ed9fdd1d4bca07d4fd2cbf607d93270b0548c9a879418" exitCode=0
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.054849 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7934-account-create-update-tq229" event={"ID":"10066581-0763-4940-bcba-cdd983819ef7","Type":"ContainerDied","Data":"7fef589dd234562a1f8ed9fdd1d4bca07d4fd2cbf607d93270b0548c9a879418"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.054875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7934-account-create-update-tq229" event={"ID":"10066581-0763-4940-bcba-cdd983819ef7","Type":"ContainerStarted","Data":"be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.056777 4820 generic.go:334] "Generic (PLEG): container finished" podID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerID="0fea29e38ddb40995e5831792abda163aa5514fd473324369df5f3b8327ea829" exitCode=0
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.056843 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" event={"ID":"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51","Type":"ContainerDied","Data":"0fea29e38ddb40995e5831792abda163aa5514fd473324369df5f3b8327ea829"}
Feb 21 08:20:28 crc kubenswrapper[4820]: I0221 08:20:28.056869 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" event={"ID":"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51","Type":"ContainerStarted","Data":"0b87d522639e04e72af8d34d9124b4a57eb45c119e4e1cde1e5d5dbfbfa526f7"}
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.455677 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f"
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.544847 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") pod \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.545066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") pod \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\" (UID: \"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.546102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" (UID: "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.557517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn" (OuterVolumeSpecName: "kube-api-access-rq2xn") pod "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" (UID: "1a418ce3-1a88-442d-9c0a-3aea9ad0cc51"). InnerVolumeSpecName "kube-api-access-rq2xn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.647170 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.647203 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq2xn\" (UniqueName: \"kubernetes.io/projected/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51-kube-api-access-rq2xn\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.653653 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks"
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.663555 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-48s57"
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.674903 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229"
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.695428 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp"
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.718005 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4"
Feb 21 08:20:29 crc kubenswrapper[4820]: E0221 08:20:29.810601 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a418ce3_1a88_442d_9c0a_3aea9ad0cc51.slice\": RecentStats: unable to find data in memory cache]"
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") pod \"245926d7-e415-4af9-b793-9546bb73dc0c\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") pod \"10066581-0763-4940-bcba-cdd983819ef7\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851453 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") pod \"245926d7-e415-4af9-b793-9546bb73dc0c\" (UID: \"245926d7-e415-4af9-b793-9546bb73dc0c\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") pod \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") pod \"e47106ba-9033-418d-a248-6f7ee03d05e6\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851537 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") pod \"10066581-0763-4940-bcba-cdd983819ef7\" (UID: \"10066581-0763-4940-bcba-cdd983819ef7\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851579 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") pod \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\" (UID: \"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") pod \"e47106ba-9033-418d-a248-6f7ee03d05e6\" (UID: \"e47106ba-9033-418d-a248-6f7ee03d05e6\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851690 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") pod \"96717fc4-053b-4426-ab50-dc0786c2eb7e\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851709 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") pod \"96717fc4-053b-4426-ab50-dc0786c2eb7e\" (UID: \"96717fc4-053b-4426-ab50-dc0786c2eb7e\") "
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.851888 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10066581-0763-4940-bcba-cdd983819ef7" (UID: "10066581-0763-4940-bcba-cdd983819ef7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852283 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e47106ba-9033-418d-a248-6f7ee03d05e6" (UID: "e47106ba-9033-418d-a248-6f7ee03d05e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852372 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96717fc4-053b-4426-ab50-dc0786c2eb7e" (UID: "96717fc4-053b-4426-ab50-dc0786c2eb7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "245926d7-e415-4af9-b793-9546bb73dc0c" (UID: "245926d7-e415-4af9-b793-9546bb73dc0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" (UID: "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852866 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10066581-0763-4940-bcba-cdd983819ef7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852931 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/245926d7-e415-4af9-b793-9546bb73dc0c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.852989 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.853043 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47106ba-9033-418d-a248-6f7ee03d05e6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.853115 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96717fc4-053b-4426-ab50-dc0786c2eb7e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf" (OuterVolumeSpecName: "kube-api-access-5npmf") pod "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" (UID: "77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b"). InnerVolumeSpecName "kube-api-access-5npmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855374 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq" (OuterVolumeSpecName: "kube-api-access-6nksq") pod "96717fc4-053b-4426-ab50-dc0786c2eb7e" (UID: "96717fc4-053b-4426-ab50-dc0786c2eb7e"). InnerVolumeSpecName "kube-api-access-6nksq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz" (OuterVolumeSpecName: "kube-api-access-j6hlz") pod "e47106ba-9033-418d-a248-6f7ee03d05e6" (UID: "e47106ba-9033-418d-a248-6f7ee03d05e6"). InnerVolumeSpecName "kube-api-access-j6hlz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.855885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4" (OuterVolumeSpecName: "kube-api-access-r2vf4") pod "10066581-0763-4940-bcba-cdd983819ef7" (UID: "10066581-0763-4940-bcba-cdd983819ef7"). InnerVolumeSpecName "kube-api-access-r2vf4".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.858722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn" (OuterVolumeSpecName: "kube-api-access-m8wtn") pod "245926d7-e415-4af9-b793-9546bb73dc0c" (UID: "245926d7-e415-4af9-b793-9546bb73dc0c"). InnerVolumeSpecName "kube-api-access-m8wtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956538 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wtn\" (UniqueName: \"kubernetes.io/projected/245926d7-e415-4af9-b793-9546bb73dc0c-kube-api-access-m8wtn\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956575 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6hlz\" (UniqueName: \"kubernetes.io/projected/e47106ba-9033-418d-a248-6f7ee03d05e6-kube-api-access-j6hlz\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956585 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2vf4\" (UniqueName: \"kubernetes.io/projected/10066581-0763-4940-bcba-cdd983819ef7-kube-api-access-r2vf4\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956593 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5npmf\" (UniqueName: \"kubernetes.io/projected/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b-kube-api-access-5npmf\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:29 crc kubenswrapper[4820]: I0221 08:20:29.956602 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nksq\" (UniqueName: \"kubernetes.io/projected/96717fc4-053b-4426-ab50-dc0786c2eb7e-kube-api-access-6nksq\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.083614 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rllks" event={"ID":"e47106ba-9033-418d-a248-6f7ee03d05e6","Type":"ContainerDied","Data":"1d6bce569e1e07c17cca1b809961f87cd773e10900559b4307547ed148c330ba"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.083890 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6bce569e1e07c17cca1b809961f87cd773e10900559b4307547ed148c330ba" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.083641 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rllks" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.087272 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cszw4" event={"ID":"96717fc4-053b-4426-ab50-dc0786c2eb7e","Type":"ContainerDied","Data":"b28a5d09f7c8c35963057eb1b5755c1348789fd11aa98c71600295fa51311131"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.087371 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b28a5d09f7c8c35963057eb1b5755c1348789fd11aa98c71600295fa51311131" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.087460 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cszw4" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.091626 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-48s57" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.091668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-48s57" event={"ID":"77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b","Type":"ContainerDied","Data":"e3efae41380277c8b69eefd69f6f397f096d20a162b9fb48372fabb1fc853492"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.091704 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3efae41380277c8b69eefd69f6f397f096d20a162b9fb48372fabb1fc853492" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.100465 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" event={"ID":"245926d7-e415-4af9-b793-9546bb73dc0c","Type":"ContainerDied","Data":"319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.100642 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319e9c12dfe25c15c9c8ef35e203fae59520d467954a98a94de7854fb5c587e3" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.102137 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9cdf-account-create-update-r2dfp" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.102761 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7934-account-create-update-tq229" event={"ID":"10066581-0763-4940-bcba-cdd983819ef7","Type":"ContainerDied","Data":"be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.102800 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be9e87d5972aa510b52af627215fb09a87576b287e54cee7b2c283ac3cba663f" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.103485 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7934-account-create-update-tq229" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.105574 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" event={"ID":"1a418ce3-1a88-442d-9c0a-3aea9ad0cc51","Type":"ContainerDied","Data":"0b87d522639e04e72af8d34d9124b4a57eb45c119e4e1cde1e5d5dbfbfa526f7"} Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.105610 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b87d522639e04e72af8d34d9124b4a57eb45c119e4e1cde1e5d5dbfbfa526f7" Feb 21 08:20:30 crc kubenswrapper[4820]: I0221 08:20:30.105689 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9237-account-create-update-4lj2f" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.884725 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885525 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885540 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885559 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885567 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885582 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885589 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885619 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885638 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10066581-0763-4940-bcba-cdd983819ef7" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885645 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10066581-0763-4940-bcba-cdd983819ef7" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: E0221 08:20:35.885658 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885665 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885880 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885900 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10066581-0763-4940-bcba-cdd983819ef7" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885912 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885924 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" containerName="mariadb-account-create-update" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885939 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.885950 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" containerName="mariadb-database-create" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.886773 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.892812 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.893022 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.893149 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jgh78" Feb 21 08:20:35 crc kubenswrapper[4820]: I0221 08:20:35.897056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.000361 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.000515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.000943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.001179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.103742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.103895 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.104955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.105012 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.110856 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.111116 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.111383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.128910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"nova-cell0-conductor-db-sync-kjc5t\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.217181 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:36 crc kubenswrapper[4820]: I0221 08:20:36.682131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"] Feb 21 08:20:37 crc kubenswrapper[4820]: I0221 08:20:37.168955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerStarted","Data":"06cc6f9763368b24b66c6c8f88386e1fb22aafbf05dac97365b54086e06e2e4d"} Feb 21 08:20:46 crc kubenswrapper[4820]: I0221 08:20:46.257923 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerStarted","Data":"cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737"} Feb 21 08:20:46 crc kubenswrapper[4820]: I0221 08:20:46.272308 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" podStartSLOduration=2.176982809 podStartE2EDuration="11.272229947s" podCreationTimestamp="2026-02-21 08:20:35 +0000 UTC" firstStartedPulling="2026-02-21 08:20:36.688750799 +0000 UTC m=+5611.721834987" lastFinishedPulling="2026-02-21 08:20:45.783997927 +0000 UTC m=+5620.817082125" observedRunningTime="2026-02-21 08:20:46.270554792 +0000 UTC m=+5621.303638990" watchObservedRunningTime="2026-02-21 08:20:46.272229947 +0000 UTC m=+5621.305314145" Feb 21 08:20:52 crc kubenswrapper[4820]: I0221 08:20:52.306904 4820 generic.go:334] "Generic (PLEG): container finished" podID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerID="cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737" exitCode=0 Feb 21 08:20:52 crc kubenswrapper[4820]: I0221 08:20:52.307000 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" 
event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerDied","Data":"cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737"} Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.593112 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.743535 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.743771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.744023 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.744088 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") pod \"2ae13708-c06f-4967-901f-8ea42fdca38c\" (UID: \"2ae13708-c06f-4967-901f-8ea42fdca38c\") " Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.750595 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts" (OuterVolumeSpecName: "scripts") pod 
"2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.752606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9" (OuterVolumeSpecName: "kube-api-access-kb2r9") pod "2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "kube-api-access-kb2r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.770045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.774351 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data" (OuterVolumeSpecName: "config-data") pod "2ae13708-c06f-4967-901f-8ea42fdca38c" (UID: "2ae13708-c06f-4967-901f-8ea42fdca38c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846671 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846722 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846738 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb2r9\" (UniqueName: \"kubernetes.io/projected/2ae13708-c06f-4967-901f-8ea42fdca38c-kube-api-access-kb2r9\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:53 crc kubenswrapper[4820]: I0221 08:20:53.846749 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ae13708-c06f-4967-901f-8ea42fdca38c-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.324717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" event={"ID":"2ae13708-c06f-4967-901f-8ea42fdca38c","Type":"ContainerDied","Data":"06cc6f9763368b24b66c6c8f88386e1fb22aafbf05dac97365b54086e06e2e4d"} Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.324757 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06cc6f9763368b24b66c6c8f88386e1fb22aafbf05dac97365b54086e06e2e4d" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.324809 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kjc5t" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.415179 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 08:20:54 crc kubenswrapper[4820]: E0221 08:20:54.415891 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerName="nova-cell0-conductor-db-sync" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.415910 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerName="nova-cell0-conductor-db-sync" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.416107 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" containerName="nova-cell0-conductor-db-sync" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.416745 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.420214 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.420352 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jgh78" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.427292 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.559486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: 
I0221 08:20:54.559840 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.559893 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.661806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.661864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.661931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.666902 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.676234 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.678698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"nova-cell0-conductor-0\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:54 crc kubenswrapper[4820]: I0221 08:20:54.744941 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:55 crc kubenswrapper[4820]: I0221 08:20:55.201479 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 08:20:55 crc kubenswrapper[4820]: I0221 08:20:55.333698 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerStarted","Data":"e739d22e8a5fb67dd8a38933da1b7cdbf628d65d406c279afc479f8a5e13a79c"} Feb 21 08:20:56 crc kubenswrapper[4820]: I0221 08:20:56.341704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerStarted","Data":"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e"} Feb 21 08:20:56 crc kubenswrapper[4820]: I0221 08:20:56.343996 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 21 08:20:56 crc kubenswrapper[4820]: I0221 08:20:56.371689 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.371665635 podStartE2EDuration="2.371665635s" podCreationTimestamp="2026-02-21 08:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:20:56.36520565 +0000 UTC m=+5631.398289858" watchObservedRunningTime="2026-02-21 08:20:56.371665635 +0000 UTC m=+5631.404749833" Feb 21 08:21:01 crc kubenswrapper[4820]: I0221 08:21:01.039255 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:21:01 crc kubenswrapper[4820]: I0221 08:21:01.050778 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-l4whm"] Feb 21 08:21:01 crc kubenswrapper[4820]: I0221 08:21:01.708397 4820 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="8d64f747-d529-4e8f-b2ea-11458f16f00c" path="/var/lib/kubelet/pods/8d64f747-d529-4e8f-b2ea-11458f16f00c/volumes" Feb 21 08:21:02 crc kubenswrapper[4820]: I0221 08:21:02.030406 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:21:02 crc kubenswrapper[4820]: I0221 08:21:02.039826 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a50c-account-create-update-p6g4x"] Feb 21 08:21:03 crc kubenswrapper[4820]: I0221 08:21:03.706897 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41e7890-6ac4-4d64-aded-2e5934d7ceee" path="/var/lib/kubelet/pods/e41e7890-6ac4-4d64-aded-2e5934d7ceee/volumes" Feb 21 08:21:04 crc kubenswrapper[4820]: I0221 08:21:04.769286 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.367739 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.369806 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.371613 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.371958 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.379580 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.467977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.468413 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.468495 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.468629 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhzj\" (UniqueName: 
\"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.534586 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.536067 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.539483 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.561097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570497 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570583 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 
21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.570690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.576860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.577646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.584164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.616883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"nova-cell0-cell-mapping-lwzsj\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.640378 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.641647 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.648737 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.656866 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672601 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.672709 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"nova-api-0\" (UID: 
\"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.717276 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.863969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864518 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.864614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.872836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.883261 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.899658 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.901512 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.901620 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.908382 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.910712 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.910982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.912128 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.919780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.931631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.938099 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.940335 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.962739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " pod="openstack/nova-api-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966673 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc 
kubenswrapper[4820]: I0221 08:21:05.966734 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.966752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.967503 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.967611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.967742 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc 
kubenswrapper[4820]: I0221 08:21:05.967778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.968515 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 
08:21:05.968560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.977149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.977568 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 08:21:05 crc kubenswrapper[4820]: I0221 08:21:05.993557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:05.998695 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.003513 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"nova-scheduler-0\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") " pod="openstack/nova-scheduler-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.009333 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070290 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4q45\" (UniqueName: 
\"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.070577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.073975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.074023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0" Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.074054 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.074098 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.076742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.077267 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.081130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.085717 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.091168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.091619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.095851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"nova-metadata-0\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.115484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"dnsmasq-dns-65d6fd5f6f-tvl89\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.118484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"nova-cell1-novncproxy-0\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.147012 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.153317 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.339777 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.351791 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.370876 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.482016 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"]
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.645156 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.647161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.656604 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.657065 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.667840 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.737257 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.803652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.804005 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.804064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.805574 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.807169 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.907412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.907473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.907529 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.908421 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.912652 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.912663 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.927075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:06 crc kubenswrapper[4820]: I0221 08:21:06.927442 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wf76m\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.077313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.087198 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"]
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.093447 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:21:07 crc kubenswrapper[4820]: W0221 08:21:07.098698 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod543eb7a9_5b1a_407b_a035_86d3fb8bd55c.slice/crio-5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d WatchSource:0}: Error finding container 5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d: Status 404 returned error can't find the container with id 5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d
Feb 21 08:21:07 crc kubenswrapper[4820]: W0221 08:21:07.099202 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991 WatchSource:0}: Error finding container 3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991: Status 404 returned error can't find the container with id 3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.160047 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m"
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.465920 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerStarted","Data":"99c4d061985b5004dc504e31adc8b10c206eef34bacb665b33bf678fab276fd0"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.467558 4820 generic.go:334] "Generic (PLEG): container finished" podID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" exitCode=0
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.467614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerDied","Data":"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.467635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerStarted","Data":"5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.474431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerStarted","Data":"5caa9a6ee200ac7417238c6c1cc223745c163ea2c319bd460f9791be7f091ca4"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.476605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerStarted","Data":"3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.489756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerStarted","Data":"c6cf6173ff4cbbbdf79ca4f92812a53845b01aed980187b59b47e17fad3eb8ae"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.502069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerStarted","Data":"401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.502396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerStarted","Data":"c2ad7c05678bee154a5231477a5e3c8eb4dd07e5941382838f63cb24895b8bcc"}
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.520494 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lwzsj" podStartSLOduration=2.520474156 podStartE2EDuration="2.520474156s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:07.519562411 +0000 UTC m=+5642.552646619" watchObservedRunningTime="2026-02-21 08:21:07.520474156 +0000 UTC m=+5642.553558344"
Feb 21 08:21:07 crc kubenswrapper[4820]: I0221 08:21:07.624472 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:21:07 crc kubenswrapper[4820]: W0221 08:21:07.686921 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ace6b1_75c4_451e_b167_1dbe9b2471ca.slice/crio-fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960 WatchSource:0}: Error finding container fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960: Status 404 returned error can't find the container with id fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960
Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.513771 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerStarted","Data":"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505"}
Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.514074 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89"
Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.519133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerStarted","Data":"fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960"}
Feb 21 08:21:08 crc kubenswrapper[4820]: I0221 08:21:08.541044 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" podStartSLOduration=3.541024098 podStartE2EDuration="3.541024098s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:08.533692499 +0000 UTC m=+5643.566776707" watchObservedRunningTime="2026-02-21 08:21:08.541024098 +0000 UTC m=+5643.574108286"
Feb 21 08:21:09 crc kubenswrapper[4820]: I0221 08:21:09.657971 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:21:09 crc kubenswrapper[4820]: I0221 08:21:09.674555 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.543269 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerStarted","Data":"9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.543558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerStarted","Data":"17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.546074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerStarted","Data":"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.546192 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" gracePeriod=30
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.549053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerStarted","Data":"a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.550794 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerStarted","Data":"bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561117 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerStarted","Data":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerStarted","Data":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"}
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561309 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log" containerID="cri-o://16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" gracePeriod=30
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.561348 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata" containerID="cri-o://17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" gracePeriod=30
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.574892 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.73170202 podStartE2EDuration="5.574867227s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:06.755755514 +0000 UTC m=+5641.788839712" lastFinishedPulling="2026-02-21 08:21:09.598920721 +0000 UTC m=+5644.632004919" observedRunningTime="2026-02-21 08:21:10.568998169 +0000 UTC m=+5645.602082367" watchObservedRunningTime="2026-02-21 08:21:10.574867227 +0000 UTC m=+5645.607951425"
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.592900 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wf76m" podStartSLOduration=4.592873334 podStartE2EDuration="4.592873334s" podCreationTimestamp="2026-02-21 08:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:10.587442367 +0000 UTC m=+5645.620526585" watchObservedRunningTime="2026-02-21 08:21:10.592873334 +0000 UTC m=+5645.625957532"
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.623549 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.139332759 podStartE2EDuration="5.623528824s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:07.109934937 +0000 UTC m=+5642.143019135" lastFinishedPulling="2026-02-21 08:21:09.594131002 +0000 UTC m=+5644.627215200" observedRunningTime="2026-02-21 08:21:10.610378548 +0000 UTC m=+5645.643462746" watchObservedRunningTime="2026-02-21 08:21:10.623528824 +0000 UTC m=+5645.656613022"
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.642915 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.154031487 podStartE2EDuration="5.642889388s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:07.11005507 +0000 UTC m=+5642.143139268" lastFinishedPulling="2026-02-21 08:21:09.598912971 +0000 UTC m=+5644.631997169" observedRunningTime="2026-02-21 08:21:10.634950983 +0000 UTC m=+5645.668035181" watchObservedRunningTime="2026-02-21 08:21:10.642889388 +0000 UTC m=+5645.675973586"
Feb 21 08:21:10 crc kubenswrapper[4820]: I0221 08:21:10.685333 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.917787255 podStartE2EDuration="5.685305415s" podCreationTimestamp="2026-02-21 08:21:05 +0000 UTC" firstStartedPulling="2026-02-21 08:21:06.816472737 +0000 UTC m=+5641.849556935" lastFinishedPulling="2026-02-21 08:21:09.583990897 +0000 UTC m=+5644.617075095" observedRunningTime="2026-02-21 08:21:10.653675249 +0000 UTC m=+5645.686759457" watchObservedRunningTime="2026-02-21 08:21:10.685305415 +0000 UTC m=+5645.718389623"
Feb 21 08:21:10 crc kubenswrapper[4820]: E0221 08:21:10.860711 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-conmon-17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043ca807_c45c_45f9_b058_8979413aeac6.slice/crio-conmon-16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5.scope\": RecentStats: unable to find data in memory cache]"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.147758 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.203707 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.319887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") "
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.319935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") "
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.320152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") "
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.320185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") pod \"043ca807-c45c-45f9-b058-8979413aeac6\" (UID: \"043ca807-c45c-45f9-b058-8979413aeac6\") "
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.321025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs" (OuterVolumeSpecName: "logs") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.325170 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6" (OuterVolumeSpecName: "kube-api-access-qt2m6") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "kube-api-access-qt2m6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.344600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.346873 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data" (OuterVolumeSpecName: "config-data") pod "043ca807-c45c-45f9-b058-8979413aeac6" (UID: "043ca807-c45c-45f9-b058-8979413aeac6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.353018 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422872 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422907 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043ca807-c45c-45f9-b058-8979413aeac6-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422927 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt2m6\" (UniqueName: \"kubernetes.io/projected/043ca807-c45c-45f9-b058-8979413aeac6-kube-api-access-qt2m6\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.422973 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043ca807-c45c-45f9-b058-8979413aeac6-logs\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.571122 4820 generic.go:334] "Generic (PLEG): container finished" podID="043ca807-c45c-45f9-b058-8979413aeac6" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a" exitCode=0
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.571162 4820 generic.go:334] "Generic (PLEG): container finished" podID="043ca807-c45c-45f9-b058-8979413aeac6" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5" exitCode=143
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.572380 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.574613 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerDied","Data":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"}
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.575341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerDied","Data":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"}
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.575361 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"043ca807-c45c-45f9-b058-8979413aeac6","Type":"ContainerDied","Data":"3b980e8732b601b80d1ffcf398981157b065cd2bad0e95ebcee3ba1c15b52991"}
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.575384 4820 scope.go:117] "RemoveContainer" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.597755 4820 scope.go:117] "RemoveContainer" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.643557 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.655489 4820 scope.go:117] "RemoveContainer" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"
Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.656016 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": container with ID starting with 17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a not found: ID does not exist" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656067 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"} err="failed to get container status \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": rpc error: code = NotFound desc = could not find container \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": container with ID starting with 17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a not found: ID does not exist"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656103 4820 scope.go:117] "RemoveContainer" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"
Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.656421 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": container with ID starting with 16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5 not found: ID does not exist" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656477 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"} err="failed to get container status \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": rpc error: code = NotFound desc = could not find container \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": container with ID starting with 16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5 not found: ID does not exist"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.656509 4820 scope.go:117] "RemoveContainer" containerID="17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.664178 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a"} err="failed to get container status \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": rpc error: code = NotFound desc = could not find container \"17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a\": container with ID starting with 17755fd5b3859225e8711b8be0348410d065e20fef36b210aac4f0272286ce5a not found: ID does not exist"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.664248 4820 scope.go:117] "RemoveContainer" containerID="16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.664715 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5"} err="failed to get container status \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": rpc error: code = NotFound desc = could not find container \"16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5\": container with ID starting with 16aa24c0fa8115a647eed1e2f9be8ac01ab6229c2fe53c0d7e83fbea8405ecf5 not found: ID does not exist"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.672076 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.688493 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.689074 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689089 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log"
Feb 21 08:21:11 crc kubenswrapper[4820]: E0221 08:21:11.689101 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689107 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689352 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-log"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.689372 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="043ca807-c45c-45f9-b058-8979413aeac6" containerName="nova-metadata-metadata"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.691008 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.696209 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.696576 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.719229 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043ca807-c45c-45f9-b058-8979413aeac6" path="/var/lib/kubelet/pods/043ca807-c45c-45f9-b058-8979413aeac6/volumes"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.719837 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738079 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0"
Feb 21 08:21:11
crc kubenswrapper[4820]: I0221 08:21:11.738251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.738270 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.839888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.839940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.839960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.840072 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.840104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.841049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.858630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.860402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.861190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 
08:21:11 crc kubenswrapper[4820]: I0221 08:21:11.878876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " pod="openstack/nova-metadata-0" Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.018643 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:12 crc kubenswrapper[4820]: W0221 08:21:12.509177 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab47881d_31b3_45fa_bc72_fce64a00567c.slice/crio-bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397 WatchSource:0}: Error finding container bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397: Status 404 returned error can't find the container with id bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397 Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.524382 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.591785 4820 generic.go:334] "Generic (PLEG): container finished" podID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" containerID="401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76" exitCode=0 Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.591882 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerDied","Data":"401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76"} Feb 21 08:21:12 crc kubenswrapper[4820]: I0221 08:21:12.601303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerStarted","Data":"bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.612020 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerStarted","Data":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.612068 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerStarted","Data":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.614500 4820 generic.go:334] "Generic (PLEG): container finished" podID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerID="a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60" exitCode=0 Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.614727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerDied","Data":"a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60"} Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.651583 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.651560062 podStartE2EDuration="2.651560062s" podCreationTimestamp="2026-02-21 08:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:13.638888119 +0000 UTC m=+5648.671972317" watchObservedRunningTime="2026-02-21 08:21:13.651560062 +0000 UTC m=+5648.684644260" Feb 21 08:21:13 crc kubenswrapper[4820]: I0221 08:21:13.987950 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084588 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.084713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.091585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts" (OuterVolumeSpecName: "scripts") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.093490 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj" (OuterVolumeSpecName: "kube-api-access-9mhzj") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "kube-api-access-9mhzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: E0221 08:21:14.110851 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data podName:52c86e8d-fde8-46e2-856f-10b3444f1ed7 nodeName:}" failed. No retries permitted until 2026-02-21 08:21:14.610827728 +0000 UTC m=+5649.643911926 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7") : error deleting /var/lib/kubelet/pods/52c86e8d-fde8-46e2-856f-10b3444f1ed7/volume-subpaths: remove /var/lib/kubelet/pods/52c86e8d-fde8-46e2-856f-10b3444f1ed7/volume-subpaths: no such file or directory Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.113409 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.187141 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.187180 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mhzj\" (UniqueName: \"kubernetes.io/projected/52c86e8d-fde8-46e2-856f-10b3444f1ed7-kube-api-access-9mhzj\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.187193 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.629751 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lwzsj" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.630267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lwzsj" event={"ID":"52c86e8d-fde8-46e2-856f-10b3444f1ed7","Type":"ContainerDied","Data":"c2ad7c05678bee154a5231477a5e3c8eb4dd07e5941382838f63cb24895b8bcc"} Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.630302 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ad7c05678bee154a5231477a5e3c8eb4dd07e5941382838f63cb24895b8bcc" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.697158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") pod \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\" (UID: \"52c86e8d-fde8-46e2-856f-10b3444f1ed7\") " Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.700966 4820 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data" (OuterVolumeSpecName: "config-data") pod "52c86e8d-fde8-46e2-856f-10b3444f1ed7" (UID: "52c86e8d-fde8-46e2-856f-10b3444f1ed7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.812946 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52c86e8d-fde8-46e2-856f-10b3444f1ed7-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.973776 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.974043 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" containerID="cri-o://17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051" gracePeriod=30 Feb 21 08:21:14 crc kubenswrapper[4820]: I0221 08:21:14.974482 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" containerID="cri-o://9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9" gracePeriod=30 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.002846 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.003069 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler" containerID="cri-o://bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e" gracePeriod=30 Feb 21 08:21:15 crc 
kubenswrapper[4820]: I0221 08:21:15.017580 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.047332 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.058255 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-spcxr"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.098338 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.118981 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.119345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.124043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4" (OuterVolumeSpecName: "kube-api-access-2rcb4") pod "36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "kube-api-access-2rcb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.150135 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.221343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.221386 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") pod \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\" (UID: \"36ace6b1-75c4-451e-b167-1dbe9b2471ca\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.222132 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.222177 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rcb4\" (UniqueName: \"kubernetes.io/projected/36ace6b1-75c4-451e-b167-1dbe9b2471ca-kube-api-access-2rcb4\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.224908 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts" (OuterVolumeSpecName: "scripts") pod 
"36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.244176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data" (OuterVolumeSpecName: "config-data") pod "36ace6b1-75c4-451e-b167-1dbe9b2471ca" (UID: "36ace6b1-75c4-451e-b167-1dbe9b2471ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.323747 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.323785 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ace6b1-75c4-451e-b167-1dbe9b2471ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.639677 4820 generic.go:334] "Generic (PLEG): container finished" podID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerID="9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9" exitCode=0 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.640047 4820 generic.go:334] "Generic (PLEG): container finished" podID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerID="17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051" exitCode=143 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.639806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerDied","Data":"9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9"} Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.640150 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerDied","Data":"17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051"} Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wf76m" event={"ID":"36ace6b1-75c4-451e-b167-1dbe9b2471ca","Type":"ContainerDied","Data":"fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960"} Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641765 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcef1c95189e1d1cbe83f797897774c231b278e47ff1327f8aa9600ea97cd960" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641809 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-log" containerID="cri-o://6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" gracePeriod=30 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.641932 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wf76m" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.642044 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" containerID="cri-o://e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" gracePeriod=30 Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.715850 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211ff6a9-0360-4606-92ca-cd4904494ff6" path="/var/lib/kubelet/pods/211ff6a9-0360-4606-92ca-cd4904494ff6/volumes" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.744422 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.744897 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746220 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerName="nova-cell1-conductor-db-sync" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746261 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerName="nova-cell1-conductor-db-sync" Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746283 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746289 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746304 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" 
containerName="nova-manage" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746310 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" containerName="nova-manage" Feb 21 08:21:15 crc kubenswrapper[4820]: E0221 08:21:15.746342 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746348 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746523 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" containerName="nova-cell1-conductor-db-sync" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746536 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" containerName="nova-manage" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746545 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-log" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.746558 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" containerName="nova-api-api" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.747356 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.749356 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.755835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954388 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954491 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") pod \"8123556f-a4ef-4790-ba20-d4b536407aa4\" (UID: \"8123556f-a4ef-4790-ba20-d4b536407aa4\") " Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954952 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs" (OuterVolumeSpecName: "logs") pod "8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.954975 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.955633 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.956017 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.956250 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8123556f-a4ef-4790-ba20-d4b536407aa4-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.960076 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb" (OuterVolumeSpecName: "kube-api-access-f68fb") pod 
"8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "kube-api-access-f68fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.981410 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:15 crc kubenswrapper[4820]: I0221 08:21:15.990569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data" (OuterVolumeSpecName: "config-data") pod "8123556f-a4ef-4790-ba20-d4b536407aa4" (UID: "8123556f-a4ef-4790-ba20-d4b536407aa4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.056832 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.056892 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.056980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.057038 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f68fb\" (UniqueName: \"kubernetes.io/projected/8123556f-a4ef-4790-ba20-d4b536407aa4-kube-api-access-f68fb\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.057050 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.057058 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8123556f-a4ef-4790-ba20-d4b536407aa4-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.066724 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.072930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.081858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"nova-cell1-conductor-0\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.231665 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.259736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs" (OuterVolumeSpecName: "logs") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") pod \"ab47881d-31b3-45fa-bc72-fce64a00567c\" (UID: \"ab47881d-31b3-45fa-bc72-fce64a00567c\") " Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.260657 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab47881d-31b3-45fa-bc72-fce64a00567c-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.264350 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch" (OuterVolumeSpecName: "kube-api-access-rwmch") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "kube-api-access-rwmch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.282878 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.283980 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data" (OuterVolumeSpecName: "config-data") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.305535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab47881d-31b3-45fa-bc72-fce64a00567c" (UID: "ab47881d-31b3-45fa-bc72-fce64a00567c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361884 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361923 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361933 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmch\" (UniqueName: \"kubernetes.io/projected/ab47881d-31b3-45fa-bc72-fce64a00567c-kube-api-access-rwmch\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.361941 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab47881d-31b3-45fa-bc72-fce64a00567c-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.365184 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.374537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.445277 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.445587 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" containerID="cri-o://208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526" gracePeriod=10 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.653926 4820 generic.go:334] "Generic (PLEG): container finished" podID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerID="208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526" exitCode=0 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.654016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerDied","Data":"208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.662407 4820 generic.go:334] "Generic (PLEG): container finished" podID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" exitCode=0 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.662444 4820 generic.go:334] "Generic (PLEG): container finished" podID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" exitCode=143 Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.662587 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerDied","Data":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerDied","Data":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab47881d-31b3-45fa-bc72-fce64a00567c","Type":"ContainerDied","Data":"bfa92a96c07e4a67af9cabb8bf252e7373e398045ca9af503079b824e85db397"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.663797 4820 scope.go:117] "RemoveContainer" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.667624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8123556f-a4ef-4790-ba20-d4b536407aa4","Type":"ContainerDied","Data":"c6cf6173ff4cbbbdf79ca4f92812a53845b01aed980187b59b47e17fad3eb8ae"} Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.667697 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.706563 4820 scope.go:117] "RemoveContainer" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.722924 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.748226 4820 scope.go:117] "RemoveContainer" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.769737 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 08:21:16.779011 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": container with ID starting with e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe not found: ID does not exist" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.779064 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} err="failed to get container status \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": rpc error: code = NotFound desc = could not find container \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": container with ID starting with e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.779102 4820 scope.go:117] "RemoveContainer" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 
08:21:16.779979 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": container with ID starting with 6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5 not found: ID does not exist" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780029 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} err="failed to get container status \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": rpc error: code = NotFound desc = could not find container \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": container with ID starting with 6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5 not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780056 4820 scope.go:117] "RemoveContainer" containerID="e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780597 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe"} err="failed to get container status \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": rpc error: code = NotFound desc = could not find container \"e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe\": container with ID starting with e7ff6181db30263c009c97008be5e8df7f1b194b3c731c864fc2d00d78753efe not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.780641 4820 scope.go:117] "RemoveContainer" containerID="6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5" Feb 21 08:21:16 crc 
kubenswrapper[4820]: I0221 08:21:16.781028 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5"} err="failed to get container status \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": rpc error: code = NotFound desc = could not find container \"6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5\": container with ID starting with 6cd4d368971b586ce84672f290659c114fb7ea011f6db8ba93feb27b19d6b0f5 not found: ID does not exist" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.781053 4820 scope.go:117] "RemoveContainer" containerID="9e70968a86e176e47e2922eb14163f81af1cb43fac7d427b684a015f1317dec9" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.787088 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.801909 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.809914 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 08:21:16.810460 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810481 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" Feb 21 08:21:16 crc kubenswrapper[4820]: E0221 08:21:16.810511 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-log" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810517 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" 
containerName="nova-metadata-log" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810702 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-log" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.810718 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" containerName="nova-metadata-metadata" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.811873 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.816595 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.818343 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.832347 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.833939 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.839568 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.845000 4820 scope.go:117] "RemoveContainer" containerID="17e973db1a5b07340cbe98babca001d855617644f0fde707f9123e20e87ae051" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.845524 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.852622 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.882890 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 08:21:16 crc kubenswrapper[4820]: W0221 08:21:16.883923 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef3408d_c90c_48d8_85fa_366e68d6e66d.slice/crio-2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f WatchSource:0}: Error finding container 2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f: Status 404 returned error can't find the container with id 2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972839 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972867 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972940 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") 
pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.972998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:16 crc kubenswrapper[4820]: I0221 08:21:16.973020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:16.983020 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074896 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.074986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.075031 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.075272 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.079465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.079969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.080072 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: 
I0221 08:21:17.088705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.088875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.092962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"nova-api-0\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.094146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"nova-metadata-0\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.155101 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.164949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.175949 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176000 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.176352 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") pod \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\" (UID: \"5b6b45ed-f167-4479-8f6c-f0e2aa72b046\") " Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.183255 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf" (OuterVolumeSpecName: "kube-api-access-c28bf") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "kube-api-access-c28bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.234885 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.239731 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config" (OuterVolumeSpecName: "config") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.247524 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.247827 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b6b45ed-f167-4479-8f6c-f0e2aa72b046" (UID: "5b6b45ed-f167-4479-8f6c-f0e2aa72b046"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281615 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c28bf\" (UniqueName: \"kubernetes.io/projected/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-kube-api-access-c28bf\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281654 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281667 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281680 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.281691 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b6b45ed-f167-4479-8f6c-f0e2aa72b046-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.668756 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.680810 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerStarted","Data":"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.681044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerStarted","Data":"2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.681446 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.682206 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.691980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerStarted","Data":"0194164aec38f71af5721cfb64a867f87d1f6dc4cae02e011dfb17e92fdf75d8"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.697503 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.727058 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.7270344509999997 podStartE2EDuration="2.727034451s" podCreationTimestamp="2026-02-21 08:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:17.69592376 +0000 UTC m=+5652.729007958" watchObservedRunningTime="2026-02-21 08:21:17.727034451 +0000 UTC m=+5652.760118649" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.734359 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8123556f-a4ef-4790-ba20-d4b536407aa4" path="/var/lib/kubelet/pods/8123556f-a4ef-4790-ba20-d4b536407aa4/volumes" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.734983 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab47881d-31b3-45fa-bc72-fce64a00567c" 
path="/var/lib/kubelet/pods/ab47881d-31b3-45fa-bc72-fce64a00567c/volumes" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.736218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf64f4875-cnv6v" event={"ID":"5b6b45ed-f167-4479-8f6c-f0e2aa72b046","Type":"ContainerDied","Data":"e9e0ecab29aed0ecb81b655dc50c26ef2c09f8bf912783336d03514cdc73e15c"} Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.736282 4820 scope.go:117] "RemoveContainer" containerID="208d3681faccb269d263339aeb15942d8136498788c9e7df32c0db9f8d79e526" Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.738580 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.745798 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf64f4875-cnv6v"] Feb 21 08:21:17 crc kubenswrapper[4820]: I0221 08:21:17.758711 4820 scope.go:117] "RemoveContainer" containerID="700487684b5f87fbcc92aad3f9b93678a16e6a2aeaee18e715699139b2b75390" Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.707865 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerStarted","Data":"ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.709320 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerStarted","Data":"8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.709435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerStarted","Data":"9adb5f8a0edceaa45f951fcc6419526dc2350dde2023d805cc2da22c4fe36495"} Feb 21 08:21:18 crc 
kubenswrapper[4820]: I0221 08:21:18.710268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerStarted","Data":"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.710298 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerStarted","Data":"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8"} Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.728232 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.72821092 podStartE2EDuration="2.72821092s" podCreationTimestamp="2026-02-21 08:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:18.725851707 +0000 UTC m=+5653.758935905" watchObservedRunningTime="2026-02-21 08:21:18.72821092 +0000 UTC m=+5653.761295118" Feb 21 08:21:18 crc kubenswrapper[4820]: I0221 08:21:18.748767 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.748747186 podStartE2EDuration="2.748747186s" podCreationTimestamp="2026-02-21 08:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:18.746634038 +0000 UTC m=+5653.779718226" watchObservedRunningTime="2026-02-21 08:21:18.748747186 +0000 UTC m=+5653.781831384" Feb 21 08:21:19 crc kubenswrapper[4820]: I0221 08:21:19.707564 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" path="/var/lib/kubelet/pods/5b6b45ed-f167-4479-8f6c-f0e2aa72b046/volumes" Feb 21 08:21:22 crc kubenswrapper[4820]: I0221 08:21:22.165812 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 08:21:22 crc kubenswrapper[4820]: I0221 08:21:22.166195 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 08:21:26 crc kubenswrapper[4820]: I0221 08:21:26.393280 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.012631 4820 scope.go:117] "RemoveContainer" containerID="07d05dac62d0d1c533879d6419da2299dd9fef179fec90922352947180eea373" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.053142 4820 scope.go:117] "RemoveContainer" containerID="ad8c79ff3c8cfe106b6b55f544a31e4702e2207d0c03fa3122046a370bf5ac97" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.091633 4820 scope.go:117] "RemoveContainer" containerID="afe15da7c9744a1622ba946b0a8f2cad964248c6e6556d307d9afb8803cea6fb" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.156299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.156359 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.165199 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 08:21:27 crc kubenswrapper[4820]: I0221 08:21:27.165260 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.238915 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.261553 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.261875 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:28 crc kubenswrapper[4820]: I0221 08:21:28.262028 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:30 crc kubenswrapper[4820]: I0221 08:21:30.030649 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:21:30 crc kubenswrapper[4820]: I0221 08:21:30.040988 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cf89p"] Feb 21 08:21:31 crc kubenswrapper[4820]: I0221 08:21:31.714307 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85662cfe-6ca0-41d0-8858-4e63cd77f3c6" path="/var/lib/kubelet/pods/85662cfe-6ca0-41d0-8858-4e63cd77f3c6/volumes" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.237427 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.247398 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.247730 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:38 crc kubenswrapper[4820]: I0221 08:21:38.248002 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:21:40 crc kubenswrapper[4820]: I0221 08:21:40.981610 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.082267 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") pod \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.082395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") pod \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.082499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") pod \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\" (UID: \"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f\") " Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.088395 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg" (OuterVolumeSpecName: "kube-api-access-vwsmg") pod "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" (UID: "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f"). InnerVolumeSpecName "kube-api-access-vwsmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.109421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data" (OuterVolumeSpecName: "config-data") pod "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" (UID: "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.112937 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" (UID: "b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.185536 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.185573 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwsmg\" (UniqueName: \"kubernetes.io/projected/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-kube-api-access-vwsmg\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.185588 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230345 4820 generic.go:334] "Generic (PLEG): container finished" podID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" exitCode=137 Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230391 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerDied","Data":"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755"} Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f","Type":"ContainerDied","Data":"99c4d061985b5004dc504e31adc8b10c206eef34bacb665b33bf678fab276fd0"} Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.230458 4820 scope.go:117] "RemoveContainer" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.251982 4820 scope.go:117] "RemoveContainer" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.252498 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755\": container with ID starting with fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755 not found: ID does not exist" containerID="fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.252555 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755"} err="failed to get container status \"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755\": rpc error: code = NotFound desc = could not find container \"fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755\": container with ID starting with 
fe3b456e8ed0463415135c74d0f9e44f69ce36671c4bfb9b9f62361289b0e755 not found: ID does not exist" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.272120 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.285196 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.300700 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.301151 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.301196 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301203 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" Feb 21 08:21:41 crc kubenswrapper[4820]: E0221 08:21:41.301220 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="init" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="init" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.301415 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6b45ed-f167-4479-8f6c-f0e2aa72b046" containerName="dnsmasq-dns" Feb 21 08:21:41 crc 
kubenswrapper[4820]: I0221 08:21:41.301426 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" containerName="nova-cell1-novncproxy-novncproxy" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.302064 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.305370 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.306102 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.306537 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.313583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.388570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.388640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgbv\" (UniqueName: \"kubernetes.io/projected/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-kube-api-access-9zgbv\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.388678 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.389081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.389174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491485 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgbv\" (UniqueName: \"kubernetes.io/projected/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-kube-api-access-9zgbv\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.491573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.502422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.503133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0" Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.504124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.504151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.512967 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgbv\" (UniqueName: \"kubernetes.io/projected/fb1fd00e-e5fe-4977-91db-dc6b86e63e34-kube-api-access-9zgbv\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb1fd00e-e5fe-4977-91db-dc6b86e63e34\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.624178 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:41 crc kubenswrapper[4820]: I0221 08:21:41.756383 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f" path="/var/lib/kubelet/pods/b8e69ce3-80ce-4c2e-8f30-037ad1f0cc9f/volumes"
Feb 21 08:21:42 crc kubenswrapper[4820]: I0221 08:21:42.068972 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 21 08:21:42 crc kubenswrapper[4820]: W0221 08:21:42.082957 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1fd00e_e5fe_4977_91db_dc6b86e63e34.slice/crio-ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36 WatchSource:0}: Error finding container ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36: Status 404 returned error can't find the container with id ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36
Feb 21 08:21:42 crc kubenswrapper[4820]: I0221 08:21:42.242279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb1fd00e-e5fe-4977-91db-dc6b86e63e34","Type":"ContainerStarted","Data":"ba1faec9c41cee6470e4a7b3c9f46aeb303f6b8cec0384e79e4213df23691c36"}
Feb 21 08:21:43 crc kubenswrapper[4820]: I0221 08:21:43.251528 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb1fd00e-e5fe-4977-91db-dc6b86e63e34","Type":"ContainerStarted","Data":"b936b9bdaab59856d96caaba8479e7d2418e52a07686c7670c43622af2c41862"}
Feb 21 08:21:43 crc kubenswrapper[4820]: I0221 08:21:43.266110 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.2660923840000002 podStartE2EDuration="2.266092384s" podCreationTimestamp="2026-02-21 08:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:43.264382818 +0000 UTC m=+5678.297467016" watchObservedRunningTime="2026-02-21 08:21:43.266092384 +0000 UTC m=+5678.299176582"
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.269120 4820 generic.go:334] "Generic (PLEG): container finished" podID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerID="bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e" exitCode=137
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.269199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerDied","Data":"bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e"}
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.345155 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.470518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") pod \"5a472f5c-b752-4dc8-84da-8a5801397ff8\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") "
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.470651 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") pod \"5a472f5c-b752-4dc8-84da-8a5801397ff8\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") "
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.470862 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") pod \"5a472f5c-b752-4dc8-84da-8a5801397ff8\" (UID: \"5a472f5c-b752-4dc8-84da-8a5801397ff8\") "
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.475619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs" (OuterVolumeSpecName: "kube-api-access-wh8qs") pod "5a472f5c-b752-4dc8-84da-8a5801397ff8" (UID: "5a472f5c-b752-4dc8-84da-8a5801397ff8"). InnerVolumeSpecName "kube-api-access-wh8qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.495517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a472f5c-b752-4dc8-84da-8a5801397ff8" (UID: "5a472f5c-b752-4dc8-84da-8a5801397ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.496119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data" (OuterVolumeSpecName: "config-data") pod "5a472f5c-b752-4dc8-84da-8a5801397ff8" (UID: "5a472f5c-b752-4dc8-84da-8a5801397ff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.573887 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.573933 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a472f5c-b752-4dc8-84da-8a5801397ff8-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:45 crc kubenswrapper[4820]: I0221 08:21:45.573946 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh8qs\" (UniqueName: \"kubernetes.io/projected/5a472f5c-b752-4dc8-84da-8a5801397ff8-kube-api-access-wh8qs\") on node \"crc\" DevicePath \"\""
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.283357 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5a472f5c-b752-4dc8-84da-8a5801397ff8","Type":"ContainerDied","Data":"5caa9a6ee200ac7417238c6c1cc223745c163ea2c319bd460f9791be7f091ca4"}
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.283429 4820 scope.go:117] "RemoveContainer" containerID="bfdc5163d6fe6d6b59a7132c9f1e428154dc6cdcc364550f86f8dc3503c6792e"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.283627 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.316624 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.333071 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.345503 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:21:46 crc kubenswrapper[4820]: E0221 08:21:46.346232 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.346315 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.346650 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" containerName="nova-scheduler-scheduler"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.347919 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.352845 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.355656 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.489773 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.489930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.490035 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.592029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.592416 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.593194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.605053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.605127 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.607708 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"nova-scheduler-0\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") " pod="openstack/nova-scheduler-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.624966 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:46 crc kubenswrapper[4820]: I0221 08:21:46.677490 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.101021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.155344 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.155385 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.294290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerStarted","Data":"11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb"}
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.295686 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerStarted","Data":"c17979fce8baa6d02445439110a5c3b9be8bcec098230906e260f5f9059c0387"}
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.318914 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.3188957700000001 podStartE2EDuration="1.31889577s" podCreationTimestamp="2026-02-21 08:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:47.308012136 +0000 UTC m=+5682.341096334" watchObservedRunningTime="2026-02-21 08:21:47.31889577 +0000 UTC m=+5682.351979968"
Feb 21 08:21:47 crc kubenswrapper[4820]: I0221 08:21:47.710024 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a472f5c-b752-4dc8-84da-8a5801397ff8" path="/var/lib/kubelet/pods/5a472f5c-b752-4dc8-84da-8a5801397ff8/volumes"
Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.247477 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.247795 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.248419 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:48 crc kubenswrapper[4820]: I0221 08:21:48.248547 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:51 crc kubenswrapper[4820]: I0221 08:21:51.624472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:51 crc kubenswrapper[4820]: I0221 08:21:51.646605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:51 crc kubenswrapper[4820]: I0221 08:21:51.678444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.365177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.552190 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"]
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.554738 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.560730 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.561671 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.563205 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"]
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.602983 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.603215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.603436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.603550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.706261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.711730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.715932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.717431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.722696 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"nova-cell1-cell-mapping-5x9p7\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") " pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:52 crc kubenswrapper[4820]: I0221 08:21:52.878442 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:21:53 crc kubenswrapper[4820]: I0221 08:21:53.381388 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"]
Feb 21 08:21:54 crc kubenswrapper[4820]: I0221 08:21:54.361182 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerStarted","Data":"34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849"}
Feb 21 08:21:54 crc kubenswrapper[4820]: I0221 08:21:54.361957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerStarted","Data":"9515ab2db7e805d98918309a27175f89bf66262d112cd9db60d137e174678e81"}
Feb 21 08:21:54 crc kubenswrapper[4820]: I0221 08:21:54.377454 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5x9p7" podStartSLOduration=2.37740653 podStartE2EDuration="2.37740653s" podCreationTimestamp="2026-02-21 08:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:21:54.375822807 +0000 UTC m=+5689.408907025" watchObservedRunningTime="2026-02-21 08:21:54.37740653 +0000 UTC m=+5689.410490748"
Feb 21 08:21:56 crc kubenswrapper[4820]: I0221 08:21:56.677985 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 21 08:21:56 crc kubenswrapper[4820]: I0221 08:21:56.713445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 21 08:21:57 crc kubenswrapper[4820]: I0221 08:21:57.422877 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.237514 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.245629 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.245934 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:58 crc kubenswrapper[4820]: I0221 08:21:58.246087 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.91:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:21:59 crc kubenswrapper[4820]: I0221 08:21:59.408137 4820 generic.go:334] "Generic (PLEG): container finished" podID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerID="34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849" exitCode=0
Feb 21 08:21:59 crc kubenswrapper[4820]: I0221 08:21:59.408225 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerDied","Data":"34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849"}
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.681832 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") "
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") "
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775896 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") "
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.775963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") pod \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\" (UID: \"f525d5cb-a9d6-4121-bf15-1e7af7974e4f\") "
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.780528 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts" (OuterVolumeSpecName: "scripts") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.781442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt" (OuterVolumeSpecName: "kube-api-access-7tdlt") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "kube-api-access-7tdlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.806359 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data" (OuterVolumeSpecName: "config-data") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.809144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f525d5cb-a9d6-4121-bf15-1e7af7974e4f" (UID: "f525d5cb-a9d6-4121-bf15-1e7af7974e4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878086 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878128 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878143 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tdlt\" (UniqueName: \"kubernetes.io/projected/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-kube-api-access-7tdlt\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:00 crc kubenswrapper[4820]: I0221 08:22:00.878154 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f525d5cb-a9d6-4121-bf15-1e7af7974e4f-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.426023 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5x9p7" event={"ID":"f525d5cb-a9d6-4121-bf15-1e7af7974e4f","Type":"ContainerDied","Data":"9515ab2db7e805d98918309a27175f89bf66262d112cd9db60d137e174678e81"}
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.426066 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9515ab2db7e805d98918309a27175f89bf66262d112cd9db60d137e174678e81"
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.426096 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5x9p7"
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.600082 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.600450 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" containerID="cri-o://2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" gracePeriod=30
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.600495 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" containerID="cri-o://eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" gracePeriod=30
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.610601 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.611022 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" containerID="cri-o://11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" gracePeriod=30
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.662621 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.662857 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" containerID="cri-o://8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0" gracePeriod=30
Feb 21 08:22:01 crc kubenswrapper[4820]: I0221 08:22:01.662902 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" containerID="cri-o://ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e" gracePeriod=30
Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.681555 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.687800 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.690177 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:01 crc kubenswrapper[4820]: E0221 08:22:01.690265 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.435742 4820 generic.go:334] "Generic (PLEG): container finished" podID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerID="8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0" exitCode=143
Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.435813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerDied","Data":"8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0"}
Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.437969 4820 generic.go:334] "Generic (PLEG): container finished" podID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" exitCode=143
Feb 21 08:22:02 crc kubenswrapper[4820]: I0221 08:22:02.437995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerDied","Data":"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8"}
Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.680267 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.682773 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.684210 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:06 crc kubenswrapper[4820]: E0221 08:22:06.684266 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.679682 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.681263 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.682484 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:11 crc kubenswrapper[4820]: E0221 08:22:11.682525 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.449371 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545516 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") pod \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\" (UID: \"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545855 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs" (OuterVolumeSpecName: "logs") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: 
"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.545957 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558532 4820 generic.go:334] "Generic (PLEG): container finished" podID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerID="ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e" exitCode=0 Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerDied","Data":"ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558683 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96f965dc-1e6a-477d-84d7-1c6a0c66d940","Type":"ContainerDied","Data":"9adb5f8a0edceaa45f951fcc6419526dc2350dde2023d805cc2da22c4fe36495"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.558696 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9adb5f8a0edceaa45f951fcc6419526dc2350dde2023d805cc2da22c4fe36495" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.571451 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2" (OuterVolumeSpecName: "kube-api-access-qz6c2") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "kube-api-access-qz6c2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.580396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data" (OuterVolumeSpecName: "config-data") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582447 4820 generic.go:334] "Generic (PLEG): container finished" podID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" exitCode=0 Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582491 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerDied","Data":"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91e3b60f-8777-4dcf-90f1-25c22e3fa2f9","Type":"ContainerDied","Data":"0194164aec38f71af5721cfb64a867f87d1f6dc4cae02e011dfb17e92fdf75d8"} Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582544 4820 scope.go:117] "RemoveContainer" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.582706 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.602006 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.621459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" (UID: "91e3b60f-8777-4dcf-90f1-25c22e3fa2f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.625875 4820 scope.go:117] "RemoveContainer" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647336 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647390 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") pod \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\" (UID: \"96f965dc-1e6a-477d-84d7-1c6a0c66d940\") " Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647956 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647977 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.647989 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz6c2\" (UniqueName: \"kubernetes.io/projected/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9-kube-api-access-qz6c2\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.656053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs" (OuterVolumeSpecName: "logs") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.661444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6" (OuterVolumeSpecName: "kube-api-access-w5mb6") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "kube-api-access-w5mb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.711443 4820 scope.go:117] "RemoveContainer" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.717364 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636\": container with ID starting with eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636 not found: ID does not exist" containerID="eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.729435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data" (OuterVolumeSpecName: "config-data") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.722502 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636"} err="failed to get container status \"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636\": rpc error: code = NotFound desc = could not find container \"eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636\": container with ID starting with eaef366fb677fd68a580b4761716e530409128edf314fc38a315faad9448a636 not found: ID does not exist" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.729953 4820 scope.go:117] "RemoveContainer" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.731157 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8\": container with ID starting with 2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8 not found: ID does not exist" containerID="2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.731194 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8"} err="failed to get container status \"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8\": rpc error: code = NotFound desc = could not find container \"2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8\": container with ID starting with 2bfc111cd40740fe3b53eac0d42a091a3667d11133c583199b93c1db4eb5c1f8 not found: ID does not exist" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.739641 4820 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765757 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mb6\" (UniqueName: \"kubernetes.io/projected/96f965dc-1e6a-477d-84d7-1c6a0c66d940-kube-api-access-w5mb6\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765788 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765830 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.765841 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96f965dc-1e6a-477d-84d7-1c6a0c66d940-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.807566 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "96f965dc-1e6a-477d-84d7-1c6a0c66d940" (UID: "96f965dc-1e6a-477d-84d7-1c6a0c66d940"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.872763 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96f965dc-1e6a-477d-84d7-1c6a0c66d940-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.908542 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.920509 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.936537 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.936962 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.936980 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.936990 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.936997 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.937013 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937019 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" Feb 21 08:22:15 crc 
kubenswrapper[4820]: E0221 08:22:15.937043 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937050 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" Feb 21 08:22:15 crc kubenswrapper[4820]: E0221 08:22:15.937059 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerName="nova-manage" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937065 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerName="nova-manage" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937226 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-api" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937254 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" containerName="nova-api-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937265 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-metadata" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937277 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" containerName="nova-metadata-log" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.937285 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" containerName="nova-manage" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.938283 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.943124 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:22:15 crc kubenswrapper[4820]: I0221 08:22:15.947973 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.075523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.077156 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.077357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.077637 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179703 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.179919 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.180358 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.183447 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.185527 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.195648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"nova-api-0\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.255193 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.591896 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.621039 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.629415 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.646533 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.649091 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.651832 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.652565 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.657770 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.679832 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.681226 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.683677 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 08:22:16 crc kubenswrapper[4820]: E0221 08:22:16.683779 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.707295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.790862 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.892420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.892914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893040 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893500 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.893572 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.896688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.897120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.897660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.916800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"nova-metadata-0\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") " pod="openstack/nova-metadata-0"
Feb 21 08:22:16 crc kubenswrapper[4820]: I0221 08:22:16.973051 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.424351 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 08:22:17 crc kubenswrapper[4820]: W0221 08:22:17.433568 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a5efcf2_dfdc_4c49_85f1_ccbd24edaebf.slice/crio-693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0 WatchSource:0}: Error finding container 693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0: Status 404 returned error can't find the container with id 693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.602717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerStarted","Data":"07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0"}
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.603617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerStarted","Data":"8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0"}
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.603717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerStarted","Data":"b77e775ea781ef6a5ae70de88c34a44655e28ba0dca43107d59484ae85245930"}
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.606285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerStarted","Data":"693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0"}
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.622498 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.622474816 podStartE2EDuration="2.622474816s" podCreationTimestamp="2026-02-21 08:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:17.617880872 +0000 UTC m=+5712.650965070" watchObservedRunningTime="2026-02-21 08:22:17.622474816 +0000 UTC m=+5712.655559014"
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.718083 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e3b60f-8777-4dcf-90f1-25c22e3fa2f9" path="/var/lib/kubelet/pods/91e3b60f-8777-4dcf-90f1-25c22e3fa2f9/volumes"
Feb 21 08:22:17 crc kubenswrapper[4820]: I0221 08:22:17.719364 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f965dc-1e6a-477d-84d7-1c6a0c66d940" path="/var/lib/kubelet/pods/96f965dc-1e6a-477d-84d7-1c6a0c66d940/volumes"
Feb 21 08:22:18 crc kubenswrapper[4820]: I0221 08:22:18.616489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerStarted","Data":"3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd"}
Feb 21 08:22:18 crc kubenswrapper[4820]: I0221 08:22:18.616529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerStarted","Data":"f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261"}
Feb 21 08:22:18 crc kubenswrapper[4820]: I0221 08:22:18.652573 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.652551007 podStartE2EDuration="2.652551007s" podCreationTimestamp="2026-02-21 08:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:18.646561925 +0000 UTC m=+5713.679646123" watchObservedRunningTime="2026-02-21 08:22:18.652551007 +0000 UTC m=+5713.685635205"
Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.680365 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.682029 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.683353 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:21 crc kubenswrapper[4820]: E0221 08:22:21.683389 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:21 crc kubenswrapper[4820]: I0221 08:22:21.973438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 21 08:22:21 crc kubenswrapper[4820]: I0221 08:22:21.973513 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.255867 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.257375 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.679957 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.681772 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.683037 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:26 crc kubenswrapper[4820]: E0221 08:22:26.683076 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.973129 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 21 08:22:26 crc kubenswrapper[4820]: I0221 08:22:26.973486 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.230289 4820 scope.go:117] "RemoveContainer" containerID="08029266fdbaec4768281dce6906fb8acc0183782e2aefac3bdb5346ddaafd3d"
Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.299630 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.95:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.340604 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.95:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.985407 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:22:27 crc kubenswrapper[4820]: I0221 08:22:27.985438 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.96:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.678165 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.679021 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.679414 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 21 08:22:31 crc kubenswrapper[4820]: E0221 08:22:31.679460 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:31 crc kubenswrapper[4820]: I0221 08:22:31.719838 4820 generic.go:334] "Generic (PLEG): container finished" podID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb" exitCode=137
Feb 21 08:22:31 crc kubenswrapper[4820]: I0221 08:22:31.719884 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerDied","Data":"11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb"}
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.697579 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.732000 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"364f6af1-6c1b-4156-bc9b-de0229e0a315","Type":"ContainerDied","Data":"c17979fce8baa6d02445439110a5c3b9be8bcec098230906e260f5f9059c0387"}
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.732364 4820 scope.go:117] "RemoveContainer" containerID="11a3861b2192c919c27953b061f51b3309548f333e8775046f9b82549b205efb"
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.732536 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.795957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") pod \"364f6af1-6c1b-4156-bc9b-de0229e0a315\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") "
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.796258 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") pod \"364f6af1-6c1b-4156-bc9b-de0229e0a315\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") "
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.796286 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") pod \"364f6af1-6c1b-4156-bc9b-de0229e0a315\" (UID: \"364f6af1-6c1b-4156-bc9b-de0229e0a315\") "
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.819588 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq" (OuterVolumeSpecName: "kube-api-access-sc9qq") pod "364f6af1-6c1b-4156-bc9b-de0229e0a315" (UID: "364f6af1-6c1b-4156-bc9b-de0229e0a315"). InnerVolumeSpecName "kube-api-access-sc9qq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.823512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "364f6af1-6c1b-4156-bc9b-de0229e0a315" (UID: "364f6af1-6c1b-4156-bc9b-de0229e0a315"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.835093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data" (OuterVolumeSpecName: "config-data") pod "364f6af1-6c1b-4156-bc9b-de0229e0a315" (UID: "364f6af1-6c1b-4156-bc9b-de0229e0a315"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.898328 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.898373 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc9qq\" (UniqueName: \"kubernetes.io/projected/364f6af1-6c1b-4156-bc9b-de0229e0a315-kube-api-access-sc9qq\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:32 crc kubenswrapper[4820]: I0221 08:22:32.898389 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364f6af1-6c1b-4156-bc9b-de0229e0a315-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.074404 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.087545 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.103913 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:22:33 crc kubenswrapper[4820]: E0221 08:22:33.104394 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.104416 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.104655 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" containerName="nova-scheduler-scheduler"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.105528 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.108811 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.112023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.205363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.205425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.205771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.308125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.308196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.308360 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.313431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.316996 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.328029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"nova-scheduler-0\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") " pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.431050 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.708744 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364f6af1-6c1b-4156-bc9b-de0229e0a315" path="/var/lib/kubelet/pods/364f6af1-6c1b-4156-bc9b-de0229e0a315/volumes"
Feb 21 08:22:33 crc kubenswrapper[4820]: W0221 08:22:33.887300 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod475239fa_3785_4704_bef1_f554cf694456.slice/crio-b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62 WatchSource:0}: Error finding container b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62: Status 404 returned error can't find the container with id b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62
Feb 21 08:22:33 crc kubenswrapper[4820]: I0221 08:22:33.889850 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 08:22:34 crc kubenswrapper[4820]: I0221 08:22:34.751062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerStarted","Data":"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286"}
Feb 21 08:22:34 crc kubenswrapper[4820]: I0221 08:22:34.751525 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerStarted","Data":"b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62"}
Feb 21 08:22:34 crc kubenswrapper[4820]: I0221 08:22:34.779610 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.779591522 podStartE2EDuration="1.779591522s" podCreationTimestamp="2026-02-21 08:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:34.777403302 +0000 UTC m=+5729.810487530" watchObservedRunningTime="2026-02-21 08:22:34.779591522 +0000 UTC m=+5729.812675720"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.260271 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.262097 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.263439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.264876 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.765544 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.769262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.950665 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"]
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.952561 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979040 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979140 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979185 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.979209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:36 crc kubenswrapper[4820]: I0221 08:22:36.984444 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"]
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.034483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.036052 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.054800 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085276 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.085320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.086373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.087580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.091939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.101661 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.122555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"dnsmasq-dns-64b58db4ff-kq4r6\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.279552 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.859737 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 21 08:22:37 crc kubenswrapper[4820]: I0221 08:22:37.952288 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"]
Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.431291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.785510 4820 generic.go:334] "Generic (PLEG): container finished" podID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerID="f215d8f5dd859dfa673e3e2892b1a89b1627e9a6ac4059705534b7571162daeb" exitCode=0
Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.785755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerDied","Data":"f215d8f5dd859dfa673e3e2892b1a89b1627e9a6ac4059705534b7571162daeb"}
Feb 21 08:22:38 crc kubenswrapper[4820]: I0221 08:22:38.785953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerStarted","Data":"d67f845d3717911b1815a01ec1fd7dc0df11dc2b02acfc8a168dc3d28d255825"}
Feb 21 08:22:39 crc kubenswrapper[4820]: I0221 08:22:39.798717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerStarted","Data":"0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8"}
Feb 21 08:22:39 crc kubenswrapper[4820]: I0221 08:22:39.799875 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6"
Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.325306 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" podStartSLOduration=4.325290191 podStartE2EDuration="4.325290191s" podCreationTimestamp="2026-02-21 08:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:39.818641033 +0000 UTC m=+5734.851725241" watchObservedRunningTime="2026-02-21 08:22:40.325290191 +0000 UTC m=+5735.358374389"
Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.334911 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.339342 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" containerID="cri-o://8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0" gracePeriod=30
Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.339380 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api"
containerID="cri-o://07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0" gracePeriod=30 Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.812960 4820 generic.go:334] "Generic (PLEG): container finished" podID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerID="8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0" exitCode=143 Feb 21 08:22:40 crc kubenswrapper[4820]: I0221 08:22:40.813024 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerDied","Data":"8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0"} Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.432065 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.459028 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.816488 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.816543 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.863352 4820 generic.go:334] "Generic (PLEG): container finished" podID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerID="07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0" exitCode=0 
Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.868307 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerDied","Data":"07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0"} Feb 21 08:22:43 crc kubenswrapper[4820]: I0221 08:22:43.906058 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.156728 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324281 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324307 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.324347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") pod \"a64e0ba1-9522-4546-b79e-1ac9cb43f135\" (UID: 
\"a64e0ba1-9522-4546-b79e-1ac9cb43f135\") " Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.332586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs" (OuterVolumeSpecName: "logs") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.333995 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj" (OuterVolumeSpecName: "kube-api-access-tglwj") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "kube-api-access-tglwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.356829 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data" (OuterVolumeSpecName: "config-data") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.396363 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64e0ba1-9522-4546-b79e-1ac9cb43f135" (UID: "a64e0ba1-9522-4546-b79e-1ac9cb43f135"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426764 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tglwj\" (UniqueName: \"kubernetes.io/projected/a64e0ba1-9522-4546-b79e-1ac9cb43f135-kube-api-access-tglwj\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426796 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426806 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a64e0ba1-9522-4546-b79e-1ac9cb43f135-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.426815 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64e0ba1-9522-4546-b79e-1ac9cb43f135-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.875015 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.875350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a64e0ba1-9522-4546-b79e-1ac9cb43f135","Type":"ContainerDied","Data":"b77e775ea781ef6a5ae70de88c34a44655e28ba0dca43107d59484ae85245930"} Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.875431 4820 scope.go:117] "RemoveContainer" containerID="07846b9b8a0af02e22835278a21ee54b1ed2eb3d88333e73ddd8b9f9ca50f1d0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.901484 4820 scope.go:117] "RemoveContainer" containerID="8c20503b1f1242e7aa9b4faff059a037b8e4b39bc2ef0ade33567969965f1be0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.906086 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.915971 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.930634 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:44 crc kubenswrapper[4820]: E0221 08:22:44.931383 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.931505 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" Feb 21 08:22:44 crc kubenswrapper[4820]: E0221 08:22:44.931612 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.931684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.931980 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-api" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.932086 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" containerName="nova-api-log" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.933939 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.939029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.939326 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.939660 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 21 08:22:44 crc kubenswrapper[4820]: I0221 08:22:44.952084 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.036920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.036991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037113 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037139 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037176 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.037210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139066 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139151 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"nova-api-0\" (UID: 
\"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139455 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.139742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.146907 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.147002 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.150102 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.150823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.171269 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"nova-api-0\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") " pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.256676 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.707077 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64e0ba1-9522-4546-b79e-1ac9cb43f135" path="/var/lib/kubelet/pods/a64e0ba1-9522-4546-b79e-1ac9cb43f135/volumes" Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.723332 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 08:22:45 crc kubenswrapper[4820]: I0221 08:22:45.882563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerStarted","Data":"f84f25836fa8a5c0573e20405d3a79bd27bbd629ad136467d54a559c6258e788"} Feb 21 08:22:46 crc kubenswrapper[4820]: I0221 08:22:46.894886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerStarted","Data":"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"} Feb 21 08:22:46 crc kubenswrapper[4820]: I0221 08:22:46.895230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerStarted","Data":"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"} Feb 21 08:22:46 crc kubenswrapper[4820]: I0221 08:22:46.920636 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.920616719 podStartE2EDuration="2.920616719s" podCreationTimestamp="2026-02-21 08:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:22:46.912817468 +0000 UTC m=+5741.945901666" watchObservedRunningTime="2026-02-21 08:22:46.920616719 +0000 UTC m=+5741.953700917" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.281412 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.332275 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.332478 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" containerID="cri-o://51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" gracePeriod=10 Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.874056 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904435 4820 generic.go:334] "Generic (PLEG): container finished" podID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" exitCode=0 Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerDied","Data":"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505"} Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904539 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d6fd5f6f-tvl89" event={"ID":"543eb7a9-5b1a-407b-a035-86d3fb8bd55c","Type":"ContainerDied","Data":"5a46ef286aad0cc12fe47e877ef7c7e453f348a471ce2d591279fe8b81e97e5d"} Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.904579 4820 scope.go:117] "RemoveContainer" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.933478 4820 scope.go:117] "RemoveContainer" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.955674 4820 scope.go:117] "RemoveContainer" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" Feb 21 08:22:47 crc kubenswrapper[4820]: E0221 08:22:47.961526 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505\": container with ID starting with 51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505 not found: ID does not exist" containerID="51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.961785 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505"} err="failed to get container status \"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505\": rpc error: code = NotFound desc = could not find container \"51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505\": container with ID starting with 51991b16244d9f4058ea92955341051d76f0e3d461edacd951cf99c3e17cf505 not found: ID does not exist" Feb 21 08:22:47 crc 
kubenswrapper[4820]: I0221 08:22:47.961815 4820 scope.go:117] "RemoveContainer" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" Feb 21 08:22:47 crc kubenswrapper[4820]: E0221 08:22:47.962425 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e\": container with ID starting with 48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e not found: ID does not exist" containerID="48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.962483 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e"} err="failed to get container status \"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e\": rpc error: code = NotFound desc = could not find container \"48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e\": container with ID starting with 48dfb34763886242db6f2025a2253b362714ff74f0cff293405cc568e1fb6c7e not found: ID does not exist" Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996666 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996721 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996796 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996823 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:47 crc kubenswrapper[4820]: I0221 08:22:47.996853 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") pod \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\" (UID: \"543eb7a9-5b1a-407b-a035-86d3fb8bd55c\") " Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.003508 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45" (OuterVolumeSpecName: "kube-api-access-s4q45") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "kube-api-access-s4q45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.041646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.052798 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.066075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config" (OuterVolumeSpecName: "config") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.067854 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "543eb7a9-5b1a-407b-a035-86d3fb8bd55c" (UID: "543eb7a9-5b1a-407b-a035-86d3fb8bd55c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098642 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098672 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098685 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4q45\" (UniqueName: \"kubernetes.io/projected/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-kube-api-access-s4q45\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098699 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.098710 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/543eb7a9-5b1a-407b-a035-86d3fb8bd55c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.237281 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:22:48 crc kubenswrapper[4820]: I0221 08:22:48.245218 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65d6fd5f6f-tvl89"] Feb 21 08:22:49 crc kubenswrapper[4820]: I0221 08:22:49.707338 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" path="/var/lib/kubelet/pods/543eb7a9-5b1a-407b-a035-86d3fb8bd55c/volumes" Feb 21 08:22:55 crc kubenswrapper[4820]: 
I0221 08:22:55.257335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:22:55 crc kubenswrapper[4820]: I0221 08:22:55.257630 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 08:22:56 crc kubenswrapper[4820]: I0221 08:22:56.271443 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:22:56 crc kubenswrapper[4820]: I0221 08:22:56.271460 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.266317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.267213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.268024 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 08:23:05 crc kubenswrapper[4820]: I0221 08:23:05.274508 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 08:23:06 crc kubenswrapper[4820]: I0221 08:23:06.054159 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 08:23:06 crc kubenswrapper[4820]: I0221 08:23:06.064248 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 
08:23:13 crc kubenswrapper[4820]: I0221 08:23:13.816206 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:23:13 crc kubenswrapper[4820]: I0221 08:23:13.816970 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.245311 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:19 crc kubenswrapper[4820]: E0221 08:23:19.246767 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="init" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.246790 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="init" Feb 21 08:23:19 crc kubenswrapper[4820]: E0221 08:23:19.246815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.246822 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.247157 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="543eb7a9-5b1a-407b-a035-86d3fb8bd55c" containerName="dnsmasq-dns" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.248808 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.262821 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.263081 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-n22hz" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.263215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.263380 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.324816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.345435 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.345832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.346014 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.346206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.359623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.420101 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.421754 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" containerID="cri-o://3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.422234 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" containerID="cri-o://9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.432137 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.434638 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.444352 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.444629 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" containerID="cri-o://fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.444804 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" containerID="cri-o://8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64" gracePeriod=30 Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod 
\"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448288 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.448849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.449419 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.449806 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc 
kubenswrapper[4820]: I0221 08:23:19.482648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.485096 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.493602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"horizon-6f9bd7b79c-txbkn\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552539 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"horizon-66bd57fd8f-854qq\" (UID: 
\"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552580 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.552652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.597191 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.653846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.653941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.653987 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.654062 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.654099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.656657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.657095 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.657683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"horizon-66bd57fd8f-854qq\" (UID: 
\"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.658157 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.674894 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod \"horizon-66bd57fd8f-854qq\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:19 crc kubenswrapper[4820]: I0221 08:23:19.770108 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.079066 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.079581 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.185114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerStarted","Data":"4eb3448883c497758beeea4960713d6d2bc637bf465fa9c8ccfeb69d503fe899"} Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.187581 4820 generic.go:334] "Generic (PLEG): container finished" podID="57f780e9-b685-4b5b-bab3-63b31b794393" containerID="3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9" exitCode=143 Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.187646 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerDied","Data":"3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9"} Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.190089 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerID="fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f" exitCode=143 Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.190179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerDied","Data":"fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f"} Feb 21 08:23:20 crc kubenswrapper[4820]: I0221 08:23:20.262326 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.058707 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.092229 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.093745 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.101064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.111860 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.171933 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.188824 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.202229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerStarted","Data":"ca15dd4d00424e10b16c9315ace884ed75b2fbad9a42602661e866daf6703ced"} Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.224301 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.226293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.233165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.291459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.291557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292520 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.292993 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293143 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"horizon-d844c64f6-dltxp\" (UID: 
\"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.293778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.294541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.299169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.299170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.313691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 
08:23:21.318255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"horizon-d844c64f6-dltxp\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.394845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.394919 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.394970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395030 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qhz\" 
(UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395122 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395763 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.395950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.396483 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"horizon-547899c658-2788v\" (UID: 
\"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.399830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.400494 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.401547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.412319 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"horizon-547899c658-2788v\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.436746 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.555760 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:23:21 crc kubenswrapper[4820]: I0221 08:23:21.912836 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:23:22 crc kubenswrapper[4820]: I0221 08:23:22.055340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:23:22 crc kubenswrapper[4820]: I0221 08:23:22.212559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerStarted","Data":"b8979ed7b663edbb899b5b453daac362045e3fab6583881f796d8f5da1b726a5"} Feb 21 08:23:22 crc kubenswrapper[4820]: I0221 08:23:22.213975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerStarted","Data":"eff2d04aa677852d296ff8fc2a98555932014b77b70e9d62fecd2afd6b553dbd"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228348 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerID="8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64" exitCode=0 Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerDied","Data":"8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b012ae7-d786-413d-82ca-88448b64b4cd","Type":"ContainerDied","Data":"4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.228969 4820 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="4481d03455dfe7bfd51fa7956acf3da2923a2d64faccca3c25e6e25bb77ec5a9" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232669 4820 generic.go:334] "Generic (PLEG): container finished" podID="57f780e9-b685-4b5b-bab3-63b31b794393" containerID="9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4" exitCode=0 Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerDied","Data":"9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232740 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57f780e9-b685-4b5b-bab3-63b31b794393","Type":"ContainerDied","Data":"6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1"} Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.232759 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.233805 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.241819 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.335716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336180 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336318 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336384 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336430 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336481 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336542 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336551 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs" (OuterVolumeSpecName: "logs") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") pod \"57f780e9-b685-4b5b-bab3-63b31b794393\" (UID: \"57f780e9-b685-4b5b-bab3-63b31b794393\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") pod \"4b012ae7-d786-413d-82ca-88448b64b4cd\" (UID: \"4b012ae7-d786-413d-82ca-88448b64b4cd\") " Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.336788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.337278 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.337301 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57f780e9-b685-4b5b-bab3-63b31b794393-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.339384 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.339453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs" (OuterVolumeSpecName: "logs") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.342450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z" (OuterVolumeSpecName: "kube-api-access-4tw8z") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "kube-api-access-4tw8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.342456 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts" (OuterVolumeSpecName: "scripts") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.343397 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts" (OuterVolumeSpecName: "scripts") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.348673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc" (OuterVolumeSpecName: "kube-api-access-dnczc") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "kube-api-access-dnczc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.371938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.389009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.417470 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.424703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data" (OuterVolumeSpecName: "config-data") pod "4b012ae7-d786-413d-82ca-88448b64b4cd" (UID: "4b012ae7-d786-413d-82ca-88448b64b4cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.425511 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data" (OuterVolumeSpecName: "config-data") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.431411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "57f780e9-b685-4b5b-bab3-63b31b794393" (UID: "57f780e9-b685-4b5b-bab3-63b31b794393"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438638 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438673 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438685 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b012ae7-d786-413d-82ca-88448b64b4cd-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438694 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438703 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438711 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438720 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnczc\" (UniqueName: \"kubernetes.io/projected/4b012ae7-d786-413d-82ca-88448b64b4cd-kube-api-access-dnczc\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438729 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438737 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f780e9-b685-4b5b-bab3-63b31b794393-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438745 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438753 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tw8z\" (UniqueName: \"kubernetes.io/projected/57f780e9-b685-4b5b-bab3-63b31b794393-kube-api-access-4tw8z\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: I0221 08:23:23.438761 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b012ae7-d786-413d-82ca-88448b64b4cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:23:23 crc kubenswrapper[4820]: E0221 08:23:23.863701 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f780e9_b685_4b5b_bab3_63b31b794393.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f780e9_b685_4b5b_bab3_63b31b794393.slice/crio-6cf72bcbf2a073ab72014714c13787a4273dbe3561b7424b9118c55987b585a1\": RecentStats: unable to find data in memory cache]" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.242511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.242511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.281146 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.310612 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.323832 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.339522 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.351551 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352068 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352081 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352103 4820 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352109 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352132 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352138 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: E0221 08:23:24.352146 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352324 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352364 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352376 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-httpd" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.352397 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" containerName="glance-log" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.353484 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.357130 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.361095 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.361973 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mrcwm" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.362212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.365050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.366769 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.370048 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.370412 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.378394 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.387536 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m6p\" (UniqueName: \"kubernetes.io/projected/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-kube-api-access-l9m6p\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.467950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-logs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468745 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74254\" (UniqueName: \"kubernetes.io/projected/8b461284-e512-4b62-95ae-fc82b119c340-kube-api-access-74254\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.468982 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-logs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.573660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74254\" (UniqueName: \"kubernetes.io/projected/8b461284-e512-4b62-95ae-fc82b119c340-kube-api-access-74254\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.573845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-logs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.573945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9m6p\" (UniqueName: \"kubernetes.io/projected/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-kube-api-access-l9m6p\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574543 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574734 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-logs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.574837 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.576626 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.577356 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.584036 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.584357 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-logs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.587583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.591581 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.591738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.596838 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.601073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b461284-e512-4b62-95ae-fc82b119c340-logs\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.601568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.605220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b461284-e512-4b62-95ae-fc82b119c340-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.607107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.608791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74254\" (UniqueName: \"kubernetes.io/projected/8b461284-e512-4b62-95ae-fc82b119c340-kube-api-access-74254\") pod \"glance-default-internal-api-0\" (UID: \"8b461284-e512-4b62-95ae-fc82b119c340\") " pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.615824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9m6p\" (UniqueName: \"kubernetes.io/projected/4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c-kube-api-access-l9m6p\") pod \"glance-default-external-api-0\" (UID: \"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c\") " pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.673116 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:24 crc kubenswrapper[4820]: I0221 08:23:24.697895 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:25 crc kubenswrapper[4820]: I0221 08:23:25.721368 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b012ae7-d786-413d-82ca-88448b64b4cd" path="/var/lib/kubelet/pods/4b012ae7-d786-413d-82ca-88448b64b4cd/volumes"
Feb 21 08:23:25 crc kubenswrapper[4820]: I0221 08:23:25.728380 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f780e9-b685-4b5b-bab3-63b31b794393" path="/var/lib/kubelet/pods/57f780e9-b685-4b5b-bab3-63b31b794393/volumes"
Feb 21 08:23:29 crc kubenswrapper[4820]: I0221 08:23:29.195141 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 21 08:23:29 crc kubenswrapper[4820]: I0221 08:23:29.285864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c","Type":"ContainerStarted","Data":"00ddbb706a21079bab997f8ef05130c3237c497c557a3f1f02ecb26f05fadb8f"}
Feb 21 08:23:29 crc kubenswrapper[4820]: W0221 08:23:29.289886 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b461284_e512_4b62_95ae_fc82b119c340.slice/crio-2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8 WatchSource:0}: Error finding container 2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8: Status 404 returned error can't find the container with id 2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8
Feb 21 08:23:29 crc kubenswrapper[4820]: I0221 08:23:29.290883 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.299269 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerStarted","Data":"6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b"}
Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.305551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8b461284-e512-4b62-95ae-fc82b119c340","Type":"ContainerStarted","Data":"2c472a320bafe05a2571a6d67326e541c78aef63d239415061079112cab58bd8"}
Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.318678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerStarted","Data":"51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4"}
Feb 21 08:23:30 crc kubenswrapper[4820]: I0221 08:23:30.321614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c","Type":"ContainerStarted","Data":"50f4060c16dc9082b56c390f4dcec7673f57073afe563f4603d30d3bf17025e3"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.334843 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerStarted","Data":"02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.336442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerStarted","Data":"7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.336606 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66bd57fd8f-854qq" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" containerID="cri-o://51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4" gracePeriod=30
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.337214 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66bd57fd8f-854qq" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" containerID="cri-o://7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d" gracePeriod=30
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.340939 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c","Type":"ContainerStarted","Data":"f2e4f3d7795d49d65b123edda21972d6494394de2f38ab6da002d16ef4ad13ff"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.344052 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerStarted","Data":"b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.346101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8b461284-e512-4b62-95ae-fc82b119c340","Type":"ContainerStarted","Data":"be887c463e233940a75b1c3b78d93de83c1fc45b946d6370c0a35684bae704d0"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.347751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerStarted","Data":"75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.347801 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerStarted","Data":"31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a"}
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.369797 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66bd57fd8f-854qq" podStartSLOduration=3.499159319 podStartE2EDuration="12.369778138s" podCreationTimestamp="2026-02-21 08:23:19 +0000 UTC" firstStartedPulling="2026-02-21 08:23:20.27128258 +0000 UTC m=+5775.304366778" lastFinishedPulling="2026-02-21 08:23:29.141901399 +0000 UTC m=+5784.174985597" observedRunningTime="2026-02-21 08:23:31.360771695 +0000 UTC m=+5786.393855893" watchObservedRunningTime="2026-02-21 08:23:31.369778138 +0000 UTC m=+5786.402862336"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.396946 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-547899c658-2788v" podStartSLOduration=2.563259429 podStartE2EDuration="10.396921583s" podCreationTimestamp="2026-02-21 08:23:21 +0000 UTC" firstStartedPulling="2026-02-21 08:23:22.058832434 +0000 UTC m=+5777.091916632" lastFinishedPulling="2026-02-21 08:23:29.892494588 +0000 UTC m=+5784.925578786" observedRunningTime="2026-02-21 08:23:31.383933942 +0000 UTC m=+5786.417018170" watchObservedRunningTime="2026-02-21 08:23:31.396921583 +0000 UTC m=+5786.430005781"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.411707 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.411681162 podStartE2EDuration="7.411681162s" podCreationTimestamp="2026-02-21 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:23:31.400322865 +0000 UTC m=+5786.433407063" watchObservedRunningTime="2026-02-21 08:23:31.411681162 +0000 UTC m=+5786.444765350"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.438127 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d844c64f6-dltxp" podStartSLOduration=3.210151731 podStartE2EDuration="10.438105587s" podCreationTimestamp="2026-02-21 08:23:21 +0000 UTC" firstStartedPulling="2026-02-21 08:23:21.92562327 +0000 UTC m=+5776.958707458" lastFinishedPulling="2026-02-21 08:23:29.153577116 +0000 UTC m=+5784.186661314" observedRunningTime="2026-02-21 08:23:31.428992871 +0000 UTC m=+5786.462077079" watchObservedRunningTime="2026-02-21 08:23:31.438105587 +0000 UTC m=+5786.471189785"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.438446 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d844c64f6-dltxp"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.438496 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d844c64f6-dltxp"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.556909 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-547899c658-2788v"
Feb 21 08:23:31 crc kubenswrapper[4820]: I0221 08:23:31.556958 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-547899c658-2788v"
Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.365354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerStarted","Data":"6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585"}
Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.365362 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f9bd7b79c-txbkn" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" containerID="cri-o://02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3" gracePeriod=30
Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.365615 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f9bd7b79c-txbkn" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" containerID="cri-o://6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585" gracePeriod=30
Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.372917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8b461284-e512-4b62-95ae-fc82b119c340","Type":"ContainerStarted","Data":"52fff291f2410225f8ebc1a06a3015dd791eec0f4b8ef8a5cf64bdab4897b97c"}
Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.447781 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f9bd7b79c-txbkn" podStartSLOduration=2.671937066 podStartE2EDuration="13.447762975s" podCreationTimestamp="2026-02-21 08:23:19 +0000 UTC" firstStartedPulling="2026-02-21 08:23:20.078858593 +0000 UTC m=+5775.111942791" lastFinishedPulling="2026-02-21 08:23:30.854684502 +0000 UTC m=+5785.887768700" observedRunningTime="2026-02-21 08:23:32.439315937 +0000 UTC m=+5787.472400145" watchObservedRunningTime="2026-02-21 08:23:32.447762975 +0000 UTC m=+5787.480847173"
Feb 21 08:23:32 crc kubenswrapper[4820]: I0221 08:23:32.486273 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.486254207 podStartE2EDuration="8.486254207s" podCreationTimestamp="2026-02-21 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:23:32.481529259 +0000 UTC m=+5787.514613457" watchObservedRunningTime="2026-02-21 08:23:32.486254207 +0000 UTC m=+5787.519338405"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.673883 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.673935 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.698758 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.699120 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.709809 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.737172 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.738950 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:34 crc kubenswrapper[4820]: I0221 08:23:34.747177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399776 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399813 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399825 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:35 crc kubenswrapper[4820]: I0221 08:23:35.399839 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.508313 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.589835 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.590117 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 21 08:23:37 crc kubenswrapper[4820]: I0221 08:23:37.591181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 21 08:23:38 crc kubenswrapper[4820]: I0221 08:23:38.352689 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 21 08:23:39 crc kubenswrapper[4820]: I0221 08:23:39.598738 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f9bd7b79c-txbkn"
Feb 21 08:23:39 crc kubenswrapper[4820]: I0221 08:23:39.770901 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66bd57fd8f-854qq"
Feb 21 08:23:41 crc kubenswrapper[4820]: I0221 08:23:41.438596 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused"
Feb 21 08:23:41 crc kubenswrapper[4820]: I0221 08:23:41.558834 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.103:8443: connect: connection refused"
Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.816538 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.816831 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.816884 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.817683 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 08:23:43 crc kubenswrapper[4820]: I0221 08:23:43.817748 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6" gracePeriod=600
Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.501753 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6" exitCode=0
Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.501835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6"}
Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.502148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"}
Feb 21 08:23:44 crc kubenswrapper[4820]: I0221 08:23:44.502172 4820 scope.go:117] "RemoveContainer" containerID="6de360d8968b78fd10cb209fc3ebcde7e7bd9d4caaf7c5ca95d11055ce270d3a"
Feb 21 08:23:45 crc kubenswrapper[4820]: I0221 08:23:45.059875 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w88hx"]
Feb 21 08:23:45 crc kubenswrapper[4820]: I0221 08:23:45.071531 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w88hx"]
Feb 21 08:23:45 crc kubenswrapper[4820]: I0221 08:23:45.707716 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fea2a27-a57a-4827-8e17-5d19ef7bba28" path="/var/lib/kubelet/pods/0fea2a27-a57a-4827-8e17-5d19ef7bba28/volumes"
Feb 21 08:23:46 crc kubenswrapper[4820]: I0221 08:23:46.030755 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"]
Feb 21 08:23:46 crc kubenswrapper[4820]: I0221 08:23:46.040133 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5a31-account-create-update-p74qt"]
Feb 21 08:23:47 crc kubenswrapper[4820]: I0221 08:23:47.709959 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a61ba7-b697-4b33-8ed3-9dda50a2c415" path="/var/lib/kubelet/pods/d4a61ba7-b697-4b33-8ed3-9dda50a2c415/volumes"
Feb 21 08:23:53 crc kubenswrapper[4820]: I0221 08:23:53.386621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d844c64f6-dltxp"
Feb 21 08:23:53 crc kubenswrapper[4820]: I0221 08:23:53.547612 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-547899c658-2788v"
Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.168043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d844c64f6-dltxp"
Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.298855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-547899c658-2788v"
Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.380923 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"]
Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.595056 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" containerID="cri-o://6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b" gracePeriod=30
Feb 21 08:23:55 crc kubenswrapper[4820]: I0221 08:23:55.595096 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" containerID="cri-o://b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c" gracePeriod=30
Feb 21 08:23:56 crc kubenswrapper[4820]: I0221 08:23:56.039760 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kncz4"]
Feb 21 08:23:56 crc kubenswrapper[4820]: I0221 08:23:56.051146 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kncz4"]
Feb 21 08:23:57 crc kubenswrapper[4820]: I0221 08:23:57.707595 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2285cbc5-545d-463d-ae4a-350c3fd26323" path="/var/lib/kubelet/pods/2285cbc5-545d-463d-ae4a-350c3fd26323/volumes"
Feb 21 08:23:59 crc kubenswrapper[4820]: I0221 08:23:59.628901 4820 generic.go:334] "Generic (PLEG): container finished" podID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerID="b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c" exitCode=0
Feb 21 08:23:59 crc kubenswrapper[4820]: I0221 08:23:59.629002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerDied","Data":"b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c"}
Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.438138 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused"
Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.649955 4820 generic.go:334] "Generic (PLEG): container finished" podID="9df49f4c-07ec-4360-88da-765b936357ad" containerID="7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d" exitCode=137
Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.650279 4820 generic.go:334] "Generic (PLEG): container finished" podID="9df49f4c-07ec-4360-88da-765b936357ad" containerID="51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4" exitCode=137
Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.650043 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq"
event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerDied","Data":"7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d"} Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.650322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerDied","Data":"51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4"} Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.745811 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856709 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") pod 
\"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.856940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") pod \"9df49f4c-07ec-4360-88da-765b936357ad\" (UID: \"9df49f4c-07ec-4360-88da-765b936357ad\") " Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.860568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs" (OuterVolumeSpecName: "logs") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.863936 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.864138 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k" (OuterVolumeSpecName: "kube-api-access-r484k") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "kube-api-access-r484k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.883763 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts" (OuterVolumeSpecName: "scripts") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.912638 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data" (OuterVolumeSpecName: "config-data") pod "9df49f4c-07ec-4360-88da-765b936357ad" (UID: "9df49f4c-07ec-4360-88da-765b936357ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960080 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960125 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df49f4c-07ec-4360-88da-765b936357ad-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960134 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9df49f4c-07ec-4360-88da-765b936357ad-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc kubenswrapper[4820]: I0221 08:24:01.960145 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r484k\" (UniqueName: \"kubernetes.io/projected/9df49f4c-07ec-4360-88da-765b936357ad-kube-api-access-r484k\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:01 crc 
kubenswrapper[4820]: I0221 08:24:01.960155 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9df49f4c-07ec-4360-88da-765b936357ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663001 4820 generic.go:334] "Generic (PLEG): container finished" podID="a2164709-4568-4dea-8421-e4d863e18ac3" containerID="6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585" exitCode=137 Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663551 4820 generic.go:334] "Generic (PLEG): container finished" podID="a2164709-4568-4dea-8421-e4d863e18ac3" containerID="02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3" exitCode=137 Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerDied","Data":"6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585"} Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.663726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerDied","Data":"02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3"} Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.665867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bd57fd8f-854qq" event={"ID":"9df49f4c-07ec-4360-88da-765b936357ad","Type":"ContainerDied","Data":"ca15dd4d00424e10b16c9315ace884ed75b2fbad9a42602661e866daf6703ced"} Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.665908 4820 scope.go:117] "RemoveContainer" containerID="7bfb356ca82f4b2916ff6a0b54de9d3657db7a429721701c2e2de36bac41f97d" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.666092 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66bd57fd8f-854qq" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.718822 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.729010 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66bd57fd8f-854qq"] Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.862418 4820 scope.go:117] "RemoveContainer" containerID="51ddb0e9c8f79add878fda4cfb5205e85924ad2e3a1fcb138329d5b07f06aef4" Feb 21 08:24:02 crc kubenswrapper[4820]: I0221 08:24:02.954324 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.089859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090587 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090654 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090771 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090803 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs" (OuterVolumeSpecName: "logs") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.090890 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") pod \"a2164709-4568-4dea-8421-e4d863e18ac3\" (UID: \"a2164709-4568-4dea-8421-e4d863e18ac3\") " Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.092036 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2164709-4568-4dea-8421-e4d863e18ac3-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.095861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7" (OuterVolumeSpecName: "kube-api-access-dx6n7") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "kube-api-access-dx6n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.114768 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.115671 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data" (OuterVolumeSpecName: "config-data") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.118477 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts" (OuterVolumeSpecName: "scripts") pod "a2164709-4568-4dea-8421-e4d863e18ac3" (UID: "a2164709-4568-4dea-8421-e4d863e18ac3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194392 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2164709-4568-4dea-8421-e4d863e18ac3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194430 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194441 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx6n7\" (UniqueName: \"kubernetes.io/projected/a2164709-4568-4dea-8421-e4d863e18ac3-kube-api-access-dx6n7\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.194451 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2164709-4568-4dea-8421-e4d863e18ac3-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.676707 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9bd7b79c-txbkn" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.676708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9bd7b79c-txbkn" event={"ID":"a2164709-4568-4dea-8421-e4d863e18ac3","Type":"ContainerDied","Data":"4eb3448883c497758beeea4960713d6d2bc637bf465fa9c8ccfeb69d503fe899"} Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.677856 4820 scope.go:117] "RemoveContainer" containerID="6c3446c942e2e02a71e424bada50093bc63fb5c599afaea588dc3734e7910585" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.715886 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df49f4c-07ec-4360-88da-765b936357ad" path="/var/lib/kubelet/pods/9df49f4c-07ec-4360-88da-765b936357ad/volumes" Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.716492 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.722639 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f9bd7b79c-txbkn"] Feb 21 08:24:03 crc kubenswrapper[4820]: I0221 08:24:03.824460 4820 scope.go:117] "RemoveContainer" containerID="02f7f206cc2706c22dcd8b2c2016f4fd218e9d1d76d465ddbb0f8a0b4070f3d3" Feb 21 08:24:05 crc kubenswrapper[4820]: I0221 08:24:05.714312 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" path="/var/lib/kubelet/pods/a2164709-4568-4dea-8421-e4d863e18ac3/volumes" Feb 21 08:24:11 crc kubenswrapper[4820]: I0221 08:24:11.437690 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused" Feb 21 08:24:21 crc kubenswrapper[4820]: I0221 
08:24:21.437547 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d844c64f6-dltxp" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.102:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.102:8443: connect: connection refused" Feb 21 08:24:21 crc kubenswrapper[4820]: I0221 08:24:21.438198 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:24:25 crc kubenswrapper[4820]: I0221 08:24:25.882073 4820 generic.go:334] "Generic (PLEG): container finished" podID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerID="6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b" exitCode=137 Feb 21 08:24:25 crc kubenswrapper[4820]: I0221 08:24:25.882163 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerDied","Data":"6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b"} Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.025877 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135055 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135120 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135353 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.135547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") pod \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\" (UID: \"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9\") " Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.136119 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs" (OuterVolumeSpecName: "logs") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.140413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.147996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j" (OuterVolumeSpecName: "kube-api-access-8fs8j") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "kube-api-access-8fs8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.158321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data" (OuterVolumeSpecName: "config-data") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.158576 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts" (OuterVolumeSpecName: "scripts") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.162727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.180825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" (UID: "65a2c389-bef1-4bb6-9fe1-0f52c63d59b9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238117 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238840 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238908 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.238963 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.239135 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fs8j\" (UniqueName: \"kubernetes.io/projected/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-kube-api-access-8fs8j\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.239203 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.239305 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.892037 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d844c64f6-dltxp" event={"ID":"65a2c389-bef1-4bb6-9fe1-0f52c63d59b9","Type":"ContainerDied","Data":"b8979ed7b663edbb899b5b453daac362045e3fab6583881f796d8f5da1b726a5"} Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.892100 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d844c64f6-dltxp" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.892103 4820 scope.go:117] "RemoveContainer" containerID="b7d83d8a0128ca9fb5d4d67e16ad5bccc9b6c3d4157fa7c9734060c3e64a0d5c" Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.930982 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:24:26 crc kubenswrapper[4820]: I0221 08:24:26.939448 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d844c64f6-dltxp"] Feb 21 08:24:27 crc kubenswrapper[4820]: I0221 08:24:27.049028 4820 scope.go:117] "RemoveContainer" containerID="6a6ef780cb12a9051e7dc809f048f0ffcdc59aaa7e6d67885a1007a776e9e38b" Feb 21 08:24:27 crc kubenswrapper[4820]: I0221 08:24:27.707477 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" path="/var/lib/kubelet/pods/65a2c389-bef1-4bb6-9fe1-0f52c63d59b9/volumes" Feb 21 08:24:28 crc kubenswrapper[4820]: I0221 08:24:28.746899 4820 scope.go:117] "RemoveContainer" containerID="135e969cc483fae03c701729ed4ef0eebb1f47660c935ededa411b6c1ad4f1b4" Feb 21 08:24:28 crc kubenswrapper[4820]: I0221 08:24:28.804338 4820 scope.go:117] "RemoveContainer" containerID="501babb59c40b46545eba4aa654f940bb7c87c7e466ae9ff90824f2b1d71dea7" Feb 21 08:24:28 crc kubenswrapper[4820]: I0221 08:24:28.822161 4820 scope.go:117] "RemoveContainer" containerID="2a26fe99fe0c30f653a1d68961945cc0a0de3158933b3e891813aa05adae4ac5" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.290993 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5879b888bd-q5njq"] Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292017 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292036 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292061 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292096 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292105 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292126 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292135 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: E0221 08:24:36.292147 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" Feb 21 08:24:36 crc 
kubenswrapper[4820]: E0221 08:24:36.292166 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292174 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292418 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292436 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2164709-4568-4dea-8421-e4d863e18ac3" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292461 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292472 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon-log" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292482 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df49f4c-07ec-4360-88da-765b936357ad" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.292502 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a2c389-bef1-4bb6-9fe1-0f52c63d59b9" containerName="horizon" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.293725 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.319021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5879b888bd-q5njq"] Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.426970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-secret-key\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427128 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-combined-ca-bundle\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-tls-certs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgl9h\" (UniqueName: \"kubernetes.io/projected/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-kube-api-access-zgl9h\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-logs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427595 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-config-data\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.427682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-scripts\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-tls-certs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgl9h\" (UniqueName: \"kubernetes.io/projected/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-kube-api-access-zgl9h\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-logs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529773 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-config-data\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-scripts\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-secret-key\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.529938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-combined-ca-bundle\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.530349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-logs\") pod \"horizon-5879b888bd-q5njq\" (UID: 
\"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.530799 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-scripts\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.531071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-config-data\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.536646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-combined-ca-bundle\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.536962 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-secret-key\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.539955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-horizon-tls-certs\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 
08:24:36.549409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgl9h\" (UniqueName: \"kubernetes.io/projected/d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6-kube-api-access-zgl9h\") pod \"horizon-5879b888bd-q5njq\" (UID: \"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6\") " pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:36 crc kubenswrapper[4820]: I0221 08:24:36.624363 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.151978 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5879b888bd-q5njq"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.824051 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.825471 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.856983 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.969386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.969447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:37 crc 
kubenswrapper[4820]: I0221 08:24:37.972324 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.973664 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.978495 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 21 08:24:37 crc kubenswrapper[4820]: I0221 08:24:37.988650 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.018250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5879b888bd-q5njq" event={"ID":"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6","Type":"ContainerStarted","Data":"81f3f666ea55d7ac8ed97b5b8c94c5056cd164cac1b5985c3ae74c4ae1db9fb3"} Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.018326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5879b888bd-q5njq" event={"ID":"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6","Type":"ContainerStarted","Data":"cb0ce49677d9242b28528d708c0088fbec55e49300542c5fc31e60fdd3adf149"} Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.018341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5879b888bd-q5njq" event={"ID":"d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6","Type":"ContainerStarted","Data":"95dbf117f220fe71931283c3aedab5204239edd50b563006819dc9a1c8df5cac"} Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.050996 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5879b888bd-q5njq" podStartSLOduration=2.050972845 podStartE2EDuration="2.050972845s" podCreationTimestamp="2026-02-21 08:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-21 08:24:38.045452526 +0000 UTC m=+5853.078536724" watchObservedRunningTime="2026-02-21 08:24:38.050972845 +0000 UTC m=+5853.084057043" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071659 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.071869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.073123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.091546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"heat-db-create-s4h7q\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.144293 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.175285 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.175425 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.176460 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 
08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.200719 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"heat-029a-account-create-update-bm98m\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.301651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.642586 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:24:38 crc kubenswrapper[4820]: I0221 08:24:38.797691 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:24:39 crc kubenswrapper[4820]: I0221 08:24:39.172921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-029a-account-create-update-bm98m" event={"ID":"84358593-717e-4372-b9bb-28a34fb65b6e","Type":"ContainerStarted","Data":"165c9d470f67a6f385fa18fa3a35a82b41d39fb4f9053462a397aab1c8171341"} Feb 21 08:24:39 crc kubenswrapper[4820]: I0221 08:24:39.174772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s4h7q" event={"ID":"d69513ef-06f3-4770-9e89-5b7b7fe873b2","Type":"ContainerStarted","Data":"09e62968a3a56c593b47a9085a8e6e2071ff114c2072c4d813952330f89c8396"} Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.184606 4820 generic.go:334] "Generic (PLEG): container finished" podID="84358593-717e-4372-b9bb-28a34fb65b6e" containerID="68774d2f4de18b7806f40ee1b0b156252a789383fdca19150a9a891e3ca19dd7" exitCode=0 Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.184780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-029a-account-create-update-bm98m" 
event={"ID":"84358593-717e-4372-b9bb-28a34fb65b6e","Type":"ContainerDied","Data":"68774d2f4de18b7806f40ee1b0b156252a789383fdca19150a9a891e3ca19dd7"} Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.187047 4820 generic.go:334] "Generic (PLEG): container finished" podID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerID="a12df1c2f01a52b23e3ee09bfc109790a329f88bd152cdf89529c2311ee4b560" exitCode=0 Feb 21 08:24:40 crc kubenswrapper[4820]: I0221 08:24:40.187082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s4h7q" event={"ID":"d69513ef-06f3-4770-9e89-5b7b7fe873b2","Type":"ContainerDied","Data":"a12df1c2f01a52b23e3ee09bfc109790a329f88bd152cdf89529c2311ee4b560"} Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.579090 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.584724 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.763775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") pod \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.763841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") pod \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\" (UID: \"d69513ef-06f3-4770-9e89-5b7b7fe873b2\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.763866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") pod \"84358593-717e-4372-b9bb-28a34fb65b6e\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.764278 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") pod \"84358593-717e-4372-b9bb-28a34fb65b6e\" (UID: \"84358593-717e-4372-b9bb-28a34fb65b6e\") " Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.764442 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84358593-717e-4372-b9bb-28a34fb65b6e" (UID: "84358593-717e-4372-b9bb-28a34fb65b6e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.764444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d69513ef-06f3-4770-9e89-5b7b7fe873b2" (UID: "d69513ef-06f3-4770-9e89-5b7b7fe873b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.765719 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69513ef-06f3-4770-9e89-5b7b7fe873b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.765837 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84358593-717e-4372-b9bb-28a34fb65b6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.770856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm" (OuterVolumeSpecName: "kube-api-access-r89jm") pod "84358593-717e-4372-b9bb-28a34fb65b6e" (UID: "84358593-717e-4372-b9bb-28a34fb65b6e"). InnerVolumeSpecName "kube-api-access-r89jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.775016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw" (OuterVolumeSpecName: "kube-api-access-5v4kw") pod "d69513ef-06f3-4770-9e89-5b7b7fe873b2" (UID: "d69513ef-06f3-4770-9e89-5b7b7fe873b2"). InnerVolumeSpecName "kube-api-access-5v4kw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.867332 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r89jm\" (UniqueName: \"kubernetes.io/projected/84358593-717e-4372-b9bb-28a34fb65b6e-kube-api-access-r89jm\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:41 crc kubenswrapper[4820]: I0221 08:24:41.867371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v4kw\" (UniqueName: \"kubernetes.io/projected/d69513ef-06f3-4770-9e89-5b7b7fe873b2-kube-api-access-5v4kw\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.226426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-029a-account-create-update-bm98m" event={"ID":"84358593-717e-4372-b9bb-28a34fb65b6e","Type":"ContainerDied","Data":"165c9d470f67a6f385fa18fa3a35a82b41d39fb4f9053462a397aab1c8171341"} Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.226465 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165c9d470f67a6f385fa18fa3a35a82b41d39fb4f9053462a397aab1c8171341" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.226515 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-029a-account-create-update-bm98m" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.232617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s4h7q" event={"ID":"d69513ef-06f3-4770-9e89-5b7b7fe873b2","Type":"ContainerDied","Data":"09e62968a3a56c593b47a9085a8e6e2071ff114c2072c4d813952330f89c8396"} Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.232662 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e62968a3a56c593b47a9085a8e6e2071ff114c2072c4d813952330f89c8396" Feb 21 08:24:42 crc kubenswrapper[4820]: I0221 08:24:42.232710 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-s4h7q" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.004803 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:24:43 crc kubenswrapper[4820]: E0221 08:24:43.005295 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerName="mariadb-database-create" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005316 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerName="mariadb-database-create" Feb 21 08:24:43 crc kubenswrapper[4820]: E0221 08:24:43.005647 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" containerName="mariadb-account-create-update" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005656 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" containerName="mariadb-account-create-update" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005865 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" containerName="mariadb-account-create-update" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.005883 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" containerName="mariadb-database-create" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.006666 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.008608 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.009814 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-b8zxk" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.015992 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.198787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.198887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.199006 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.301011 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"heat-db-sync-27sgb\" (UID: 
\"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.301074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.301158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.308164 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.308309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.320722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"heat-db-sync-27sgb\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.331854 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:43 crc kubenswrapper[4820]: I0221 08:24:43.855602 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:24:44 crc kubenswrapper[4820]: I0221 08:24:44.252299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerStarted","Data":"ae15f4433fafcf52b94ff7e1aa91cfc93f28bae13b9100d34ef984ada754ff6a"} Feb 21 08:24:46 crc kubenswrapper[4820]: I0221 08:24:46.625336 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:46 crc kubenswrapper[4820]: I0221 08:24:46.625869 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5879b888bd-q5njq" Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.050854 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.060707 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3bef-account-create-update-7n4bl"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.069299 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.078382 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jzbnq"] Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.710647 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0e7c5f-32ab-470c-a8eb-b0067af1ce22" path="/var/lib/kubelet/pods/4e0e7c5f-32ab-470c-a8eb-b0067af1ce22/volumes" Feb 21 08:24:49 crc kubenswrapper[4820]: I0221 08:24:49.711372 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="80901dca-016d-4c52-b87d-f953b0689f1a" path="/var/lib/kubelet/pods/80901dca-016d-4c52-b87d-f953b0689f1a/volumes" Feb 21 08:24:54 crc kubenswrapper[4820]: I0221 08:24:54.457548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerStarted","Data":"14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245"} Feb 21 08:24:54 crc kubenswrapper[4820]: I0221 08:24:54.483890 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-27sgb" podStartSLOduration=2.306692934 podStartE2EDuration="12.483865766s" podCreationTimestamp="2026-02-21 08:24:42 +0000 UTC" firstStartedPulling="2026-02-21 08:24:43.861838868 +0000 UTC m=+5858.894923066" lastFinishedPulling="2026-02-21 08:24:54.0390117 +0000 UTC m=+5869.072095898" observedRunningTime="2026-02-21 08:24:54.478905922 +0000 UTC m=+5869.511990130" watchObservedRunningTime="2026-02-21 08:24:54.483865766 +0000 UTC m=+5869.516949964" Feb 21 08:24:56 crc kubenswrapper[4820]: I0221 08:24:56.477750 4820 generic.go:334] "Generic (PLEG): container finished" podID="898015a2-3ff9-4c61-b164-4a6961c44884" containerID="14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245" exitCode=0 Feb 21 08:24:56 crc kubenswrapper[4820]: I0221 08:24:56.478394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerDied","Data":"14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245"} Feb 21 08:24:56 crc kubenswrapper[4820]: I0221 08:24:56.629882 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5879b888bd-q5njq" podUID="d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8443: connect: connection refused" Feb 21 
08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.031560 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6768b"] Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.041438 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6768b"] Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.710622 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c29c61-83db-423e-8e56-52c1637985e2" path="/var/lib/kubelet/pods/46c29c61-83db-423e-8e56-52c1637985e2/volumes" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.812610 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.903646 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") pod \"898015a2-3ff9-4c61-b164-4a6961c44884\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.903920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") pod \"898015a2-3ff9-4c61-b164-4a6961c44884\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.903987 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") pod \"898015a2-3ff9-4c61-b164-4a6961c44884\" (UID: \"898015a2-3ff9-4c61-b164-4a6961c44884\") " Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.909599 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2" (OuterVolumeSpecName: "kube-api-access-bp5m2") pod "898015a2-3ff9-4c61-b164-4a6961c44884" (UID: "898015a2-3ff9-4c61-b164-4a6961c44884"). InnerVolumeSpecName "kube-api-access-bp5m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.933150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "898015a2-3ff9-4c61-b164-4a6961c44884" (UID: "898015a2-3ff9-4c61-b164-4a6961c44884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:57 crc kubenswrapper[4820]: I0221 08:24:57.974610 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data" (OuterVolumeSpecName: "config-data") pod "898015a2-3ff9-4c61-b164-4a6961c44884" (UID: "898015a2-3ff9-4c61-b164-4a6961c44884"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.006773 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.006814 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898015a2-3ff9-4c61-b164-4a6961c44884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.006825 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5m2\" (UniqueName: \"kubernetes.io/projected/898015a2-3ff9-4c61-b164-4a6961c44884-kube-api-access-bp5m2\") on node \"crc\" DevicePath \"\"" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.494953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-27sgb" event={"ID":"898015a2-3ff9-4c61-b164-4a6961c44884","Type":"ContainerDied","Data":"ae15f4433fafcf52b94ff7e1aa91cfc93f28bae13b9100d34ef984ada754ff6a"} Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.494991 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae15f4433fafcf52b94ff7e1aa91cfc93f28bae13b9100d34ef984ada754ff6a" Feb 21 08:24:58 crc kubenswrapper[4820]: I0221 08:24:58.495101 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-27sgb" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.460790 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:24:59 crc kubenswrapper[4820]: E0221 08:24:59.461317 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" containerName="heat-db-sync" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.461334 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" containerName="heat-db-sync" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.461539 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" containerName="heat-db-sync" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.462343 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.465439 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.465749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.466041 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-b8zxk" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.490899 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.535374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: 
\"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.535453 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.535710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.536254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.562979 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.564613 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.569806 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.576795 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.639855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.639954 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.640006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.640061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 
08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.640974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.641056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.641190 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.641279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.647204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " 
pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.648383 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.649521 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.651862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.660932 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.662939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.674767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"heat-engine-67969b55f7-j9b9h\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.715638 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.743589 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.744158 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.744650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745628 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.745767 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.746384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.751513 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.753211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.757530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.760925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"heat-cfnapi-6c98bf9957-8ctvl\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") " pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.781690 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848138 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848289 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 
08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.848324 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.856388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.860030 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.860815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.868610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"heat-api-78db975b86-shg8k\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.887893 4820 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:24:59 crc kubenswrapper[4820]: I0221 08:24:59.925794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.319738 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:25:00 crc kubenswrapper[4820]: W0221 08:25:00.323191 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d104918_3b6f_4543_9ca3_0ae595be78a2.slice/crio-dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e WatchSource:0}: Error finding container dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e: Status 404 returned error can't find the container with id dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.442005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:25:00 crc kubenswrapper[4820]: W0221 08:25:00.452410 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod626f6f5d_6222_406e_a687_92b74b1c9def.slice/crio-580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c WatchSource:0}: Error finding container 580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c: Status 404 returned error can't find the container with id 580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.556547 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerStarted","Data":"dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e"} Feb 21 
08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.558286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerStarted","Data":"580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c"} Feb 21 08:25:00 crc kubenswrapper[4820]: I0221 08:25:00.592730 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:25:00 crc kubenswrapper[4820]: W0221 08:25:00.593783 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eba12fb_3cb7_4830_9722_9c9e6ab46002.slice/crio-3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab WatchSource:0}: Error finding container 3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab: Status 404 returned error can't find the container with id 3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.573729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerStarted","Data":"3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab"} Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.575692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerStarted","Data":"44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5"} Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.575791 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:25:01 crc kubenswrapper[4820]: I0221 08:25:01.596924 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-67969b55f7-j9b9h" 
podStartSLOduration=2.596904532 podStartE2EDuration="2.596904532s" podCreationTimestamp="2026-02-21 08:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:01.594917928 +0000 UTC m=+5876.628002126" watchObservedRunningTime="2026-02-21 08:25:01.596904532 +0000 UTC m=+5876.629988730" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.589308 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7fbc8dc6-rvrvw"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.591397 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.600986 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7fbc8dc6-rvrvw"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.640326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerStarted","Data":"bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115"} Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.640745 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.655323 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.656403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.682884 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data-custom\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.682967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-combined-ca-bundle\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.683029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.683071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnnl\" (UniqueName: \"kubernetes.io/projected/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-kube-api-access-spnnl\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.703416 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" podStartSLOduration=2.182800497 podStartE2EDuration="7.703390707s" podCreationTimestamp="2026-02-21 08:24:59 +0000 
UTC" firstStartedPulling="2026-02-21 08:25:00.455752256 +0000 UTC m=+5875.488836444" lastFinishedPulling="2026-02-21 08:25:05.976342456 +0000 UTC m=+5881.009426654" observedRunningTime="2026-02-21 08:25:06.687808055 +0000 UTC m=+5881.720892253" watchObservedRunningTime="2026-02-21 08:25:06.703390707 +0000 UTC m=+5881.736474905" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.704914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.754212 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.757214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.761820 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"] Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.784784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnnl\" (UniqueName: \"kubernetes.io/projected/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-kube-api-access-spnnl\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.784941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data-custom\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-combined-ca-bundle\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.785304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.806180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data-custom\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.808868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-config-data\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.817955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnnl\" (UniqueName: \"kubernetes.io/projected/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-kube-api-access-spnnl\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.819192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ac827-2c89-4d1b-afc3-a5bd668b5d60-combined-ca-bundle\") pod \"heat-engine-7fbc8dc6-rvrvw\" (UID: \"f98ac827-2c89-4d1b-afc3-a5bd668b5d60\") " pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6rhk\" (UniqueName: 
\"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887231 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.887472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.892106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.892146 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.893363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.909442 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fbc8dc6-rvrvw" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.910606 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"heat-cfnapi-7c5647759f-rwklt\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") " pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.980651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.988991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.989120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.989150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6rhk\" (UniqueName: 
\"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.989192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.992955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.996110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:06 crc kubenswrapper[4820]: I0221 08:25:06.996959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.008388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6rhk\" (UniqueName: \"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod 
\"heat-api-79f7554c5d-dxxm7\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") " pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.081380 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.709182 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.724019 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.732877 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-c9d48c7f5-9ghjf"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.734030 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.735995 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.736169 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.743450 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c9d48c7f5-9ghjf"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.768207 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d46b7f59f-tgv4t"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.769897 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.774913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.775080 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.782559 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d46b7f59f-tgv4t"] Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807072 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data-custom\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-public-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-combined-ca-bundle\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7c5\" (UniqueName: 
\"kubernetes.io/projected/55b82e21-7221-4043-b9a8-5ac5853acaa1-kube-api-access-8d7c5\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.807283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-internal-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-combined-ca-bundle\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-internal-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data-custom\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909547 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-public-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwpg\" (UniqueName: \"kubernetes.io/projected/c1f86beb-e638-4e60-a435-b09e2c01e733-kube-api-access-smwpg\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data-custom\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-public-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-combined-ca-bundle\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-internal-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.909714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7c5\" (UniqueName: \"kubernetes.io/projected/55b82e21-7221-4043-b9a8-5ac5853acaa1-kube-api-access-8d7c5\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.916132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data-custom\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.917091 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-internal-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.924864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-combined-ca-bundle\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.929770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-config-data\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.930049 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b82e21-7221-4043-b9a8-5ac5853acaa1-public-tls-certs\") pod \"heat-api-c9d48c7f5-9ghjf\" (UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:07 crc kubenswrapper[4820]: I0221 08:25:07.930697 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7c5\" (UniqueName: \"kubernetes.io/projected/55b82e21-7221-4043-b9a8-5ac5853acaa1-kube-api-access-8d7c5\") pod \"heat-api-c9d48c7f5-9ghjf\" 
(UID: \"55b82e21-7221-4043-b9a8-5ac5853acaa1\") " pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-public-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwpg\" (UniqueName: \"kubernetes.io/projected/c1f86beb-e638-4e60-a435-b09e2c01e733-kube-api-access-smwpg\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-internal-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-combined-ca-bundle\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011350 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " 
pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.011808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data-custom\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.014492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-public-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.014739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-combined-ca-bundle\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.015442 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data-custom\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.015565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-internal-tls-certs\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc 
kubenswrapper[4820]: I0221 08:25:08.019992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f86beb-e638-4e60-a435-b09e2c01e733-config-data\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.031012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwpg\" (UniqueName: \"kubernetes.io/projected/c1f86beb-e638-4e60-a435-b09e2c01e733-kube-api-access-smwpg\") pod \"heat-cfnapi-d46b7f59f-tgv4t\" (UID: \"c1f86beb-e638-4e60-a435-b09e2c01e733\") " pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.057798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c9d48c7f5-9ghjf" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.091121 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" Feb 21 08:25:08 crc kubenswrapper[4820]: I0221 08:25:08.657206 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" containerID="cri-o://bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115" gracePeriod=60 Feb 21 08:25:10 crc kubenswrapper[4820]: I0221 08:25:10.674561 4820 generic.go:334] "Generic (PLEG): container finished" podID="626f6f5d-6222-406e-a687-92b74b1c9def" containerID="bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115" exitCode=0 Feb 21 08:25:10 crc kubenswrapper[4820]: I0221 08:25:10.674704 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerDied","Data":"bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115"} Feb 21 08:25:11 crc kubenswrapper[4820]: I0221 08:25:11.630410 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5879b888bd-q5njq" podUID="d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 08:25:12 crc kubenswrapper[4820]: I0221 08:25:12.992731 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl"
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.111411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") "
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.111561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") "
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.111655 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") "
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.112127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") pod \"626f6f5d-6222-406e-a687-92b74b1c9def\" (UID: \"626f6f5d-6222-406e-a687-92b74b1c9def\") "
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.126541 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b" (OuterVolumeSpecName: "kube-api-access-5cj8b") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "kube-api-access-5cj8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.126666 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.153217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.207313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data" (OuterVolumeSpecName: "config-data") pod "626f6f5d-6222-406e-a687-92b74b1c9def" (UID: "626f6f5d-6222-406e-a687-92b74b1c9def"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215479 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215515 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj8b\" (UniqueName: \"kubernetes.io/projected/626f6f5d-6222-406e-a687-92b74b1c9def-kube-api-access-5cj8b\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215529 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.215537 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/626f6f5d-6222-406e-a687-92b74b1c9def-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.355736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"]
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.369655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c9d48c7f5-9ghjf"]
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.393770 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d46b7f59f-tgv4t"]
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.428777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7fbc8dc6-rvrvw"]
Feb 21 08:25:13 crc kubenswrapper[4820]: W0221 08:25:13.430844 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64841624_ecc9_4a68_b2f8_294f328c7ce3.slice/crio-c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f WatchSource:0}: Error finding container c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f: Status 404 returned error can't find the container with id c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.446807 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"]
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.701157 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78db975b86-shg8k" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" containerID="cri-o://9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" gracePeriod=60
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerStarted","Data":"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714347 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78db975b86-shg8k"
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerStarted","Data":"ef07e0c03be2d134b2f51dc8fc1906a244df0aab0718ffcaa30aac6f0d1acf91"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.714370 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerStarted","Data":"c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.725826 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c9d48c7f5-9ghjf" event={"ID":"55b82e21-7221-4043-b9a8-5ac5853acaa1","Type":"ContainerStarted","Data":"255549b7249f1f90ac63836dd47adfe474769411d294df20d80dd46a6de64a12"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.727595 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78db975b86-shg8k" podStartSLOduration=3.481834526 podStartE2EDuration="14.727575329s" podCreationTimestamp="2026-02-21 08:24:59 +0000 UTC" firstStartedPulling="2026-02-21 08:25:00.596493094 +0000 UTC m=+5875.629577292" lastFinishedPulling="2026-02-21 08:25:11.842233897 +0000 UTC m=+5886.875318095" observedRunningTime="2026-02-21 08:25:13.723850118 +0000 UTC m=+5888.756934316" watchObservedRunningTime="2026-02-21 08:25:13.727575329 +0000 UTC m=+5888.760659537"
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.728209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl" event={"ID":"626f6f5d-6222-406e-a687-92b74b1c9def","Type":"ContainerDied","Data":"580727d40e4f676d3cfe0c8263cb0f5ca6fb6c011c3df97d62e9d7b25b57368c"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.728272 4820 scope.go:117] "RemoveContainer" containerID="bf999cf0087b347dc47a32148fa88821e0546ffc0fc80fd764755c63b7d8a115"
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.728271 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c98bf9957-8ctvl"
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.730580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fbc8dc6-rvrvw" event={"ID":"f98ac827-2c89-4d1b-afc3-a5bd668b5d60","Type":"ContainerStarted","Data":"2a1b2d15de70e7c7f7ff1c4fb21ff8ae7326cfda54af2d2ad996dcb3e37e6abd"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.733152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" event={"ID":"c1f86beb-e638-4e60-a435-b09e2c01e733","Type":"ContainerStarted","Data":"fb5b8e08da6c0049ee03bd95f2b9b6aadfa4dea735ad1168d53153507b080c6b"}
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.808211 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"]
Feb 21 08:25:13 crc kubenswrapper[4820]: I0221 08:25:13.818539 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6c98bf9957-8ctvl"]
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.712487 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" path="/var/lib/kubelet/pods/626f6f5d-6222-406e-a687-92b74b1c9def/volumes"
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.756871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerStarted","Data":"bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a"}
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.757068 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c5647759f-rwklt"
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.759062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerStarted","Data":"76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16"}
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.771412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c9d48c7f5-9ghjf" event={"ID":"55b82e21-7221-4043-b9a8-5ac5853acaa1","Type":"ContainerStarted","Data":"12a1b6cc00c6a599e5e9e9f51eafa296721c5391c5c8080e55a9dc942d5ebde8"}
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.776031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fbc8dc6-rvrvw" event={"ID":"f98ac827-2c89-4d1b-afc3-a5bd668b5d60","Type":"ContainerStarted","Data":"17decb4e274786f3b92565e5c3decf718376cb107e73d2452ad7162b9a1683a5"}
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.778720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" event={"ID":"c1f86beb-e638-4e60-a435-b09e2c01e733","Type":"ContainerStarted","Data":"991b463bf4f27fff49ec8d5a638b88b0990c3d9bc28c012c1da88d4c22ff310b"}
Feb 21 08:25:15 crc kubenswrapper[4820]: I0221 08:25:15.860645 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7c5647759f-rwklt" podStartSLOduration=9.860623042 podStartE2EDuration="9.860623042s" podCreationTimestamp="2026-02-21 08:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:15.844805574 +0000 UTC m=+5890.877889772" watchObservedRunningTime="2026-02-21 08:25:15.860623042 +0000 UTC m=+5890.893707240"
Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.832606 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-79f7554c5d-dxxm7" podStartSLOduration=10.83258284 podStartE2EDuration="10.83258284s" podCreationTimestamp="2026-02-21 08:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.815146628 +0000 UTC m=+5891.848230836" watchObservedRunningTime="2026-02-21 08:25:16.83258284 +0000 UTC m=+5891.865667058"
Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.843869 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t" podStartSLOduration=9.843849034 podStartE2EDuration="9.843849034s" podCreationTimestamp="2026-02-21 08:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.841376857 +0000 UTC m=+5891.874461055" watchObservedRunningTime="2026-02-21 08:25:16.843849034 +0000 UTC m=+5891.876933232"
Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.890652 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-c9d48c7f5-9ghjf" podStartSLOduration=9.890628361 podStartE2EDuration="9.890628361s" podCreationTimestamp="2026-02-21 08:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.86842612 +0000 UTC m=+5891.901510318" watchObservedRunningTime="2026-02-21 08:25:16.890628361 +0000 UTC m=+5891.923712569"
Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.892042 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7fbc8dc6-rvrvw" podStartSLOduration=10.892032609 podStartE2EDuration="10.892032609s" podCreationTimestamp="2026-02-21 08:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:25:16.882969393 +0000 UTC m=+5891.916053601" watchObservedRunningTime="2026-02-21 08:25:16.892032609 +0000 UTC m=+5891.925116807"
Feb 21 08:25:16 crc kubenswrapper[4820]: I0221 08:25:16.910879 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7fbc8dc6-rvrvw"
Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.082704 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79f7554c5d-dxxm7"
Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.811866 4820 generic.go:334] "Generic (PLEG): container finished" podID="422684e4-6de9-44af-9684-9cc724395af6" containerID="bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a" exitCode=1
Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.812002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerDied","Data":"bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a"}
Feb 21 08:25:17 crc kubenswrapper[4820]: I0221 08:25:17.812747 4820 scope.go:117] "RemoveContainer" containerID="bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a"
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.058424 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-c9d48c7f5-9ghjf"
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.091724 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t"
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.349209 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5879b888bd-q5njq"
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.827589 4820 generic.go:334] "Generic (PLEG): container finished" podID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerID="76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16" exitCode=1
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.827655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerDied","Data":"76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16"}
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.828635 4820 scope.go:117] "RemoveContainer" containerID="76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16"
Feb 21 08:25:18 crc kubenswrapper[4820]: I0221 08:25:18.830622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerStarted","Data":"7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"}
Feb 21 08:25:19 crc kubenswrapper[4820]: I0221 08:25:19.806659 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-67969b55f7-j9b9h"
Feb 21 08:25:19 crc kubenswrapper[4820]: I0221 08:25:19.843431 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c5647759f-rwklt"
Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.095937 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5879b888bd-q5njq"
Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.154377 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547899c658-2788v"]
Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.154707 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" containerID="cri-o://31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a" gracePeriod=30
Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.154912 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" containerID="cri-o://75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c" gracePeriod=30
Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.855348 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerStarted","Data":"bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610"}
Feb 21 08:25:20 crc kubenswrapper[4820]: I0221 08:25:20.855899 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79f7554c5d-dxxm7"
Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.396115 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78db975b86-shg8k"
Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.867757 4820 generic.go:334] "Generic (PLEG): container finished" podID="422684e4-6de9-44af-9684-9cc724395af6" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7" exitCode=1
Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.867829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerDied","Data":"7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"}
Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.867887 4820 scope.go:117] "RemoveContainer" containerID="bd3749495127625ec0880c48cfd9fe17d8523f6ea8bcf84108e35c4c943d376a"
Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.868700 4820 scope.go:117] "RemoveContainer" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"
Feb 21 08:25:21 crc kubenswrapper[4820]: E0221 08:25:21.869020 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7c5647759f-rwklt_openstack(422684e4-6de9-44af-9684-9cc724395af6)\"" pod="openstack/heat-cfnapi-7c5647759f-rwklt" podUID="422684e4-6de9-44af-9684-9cc724395af6"
Feb 21 08:25:21 crc kubenswrapper[4820]: I0221 08:25:21.981346 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7c5647759f-rwklt"
Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.878135 4820 scope.go:117] "RemoveContainer" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"
Feb 21 08:25:22 crc kubenswrapper[4820]: E0221 08:25:22.878692 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7c5647759f-rwklt_openstack(422684e4-6de9-44af-9684-9cc724395af6)\"" pod="openstack/heat-cfnapi-7c5647759f-rwklt" podUID="422684e4-6de9-44af-9684-9cc724395af6"
Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.879955 4820 generic.go:334] "Generic (PLEG): container finished" podID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerID="bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610" exitCode=1
Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.879980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerDied","Data":"bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610"}
Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.880008 4820 scope.go:117] "RemoveContainer" containerID="76bacff54433606319f6320cc7021ae65ce92e972f8e89e2cb4a54f35bfbab16"
Feb 21 08:25:22 crc kubenswrapper[4820]: I0221 08:25:22.880347 4820 scope.go:117] "RemoveContainer" containerID="bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610"
Feb 21 08:25:22 crc kubenswrapper[4820]: E0221 08:25:22.880539 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-79f7554c5d-dxxm7_openstack(64841624-ecc9-4a68-b2f8-294f328c7ce3)\"" pod="openstack/heat-api-79f7554c5d-dxxm7" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3"
Feb 21 08:25:23 crc kubenswrapper[4820]: I0221 08:25:23.299819 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:54136->10.217.1.103:8443: read: connection reset by peer"
Feb 21 08:25:23 crc kubenswrapper[4820]: I0221 08:25:23.892579 4820 generic.go:334] "Generic (PLEG): container finished" podID="81b52673-da5b-421f-be4c-d5608c8d82df" containerID="75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c" exitCode=0
Feb 21 08:25:23 crc kubenswrapper[4820]: I0221 08:25:23.892621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerDied","Data":"75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c"}
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.374899 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d46b7f59f-tgv4t"
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.440764 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"]
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.499214 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-c9d48c7f5-9ghjf"
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.553670 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"]
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.856212 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt"
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.906101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c5647759f-rwklt" event={"ID":"422684e4-6de9-44af-9684-9cc724395af6","Type":"ContainerDied","Data":"ef07e0c03be2d134b2f51dc8fc1906a244df0aab0718ffcaa30aac6f0d1acf91"}
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.906115 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c5647759f-rwklt"
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.906149 4820 scope.go:117] "RemoveContainer" containerID="7a450d3c591f7ab2d8d12f40055886bf26abdc1bc260b28a89d0dbfc9e3f3eb7"
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.907982 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79f7554c5d-dxxm7" event={"ID":"64841624-ecc9-4a68-b2f8-294f328c7ce3","Type":"ContainerDied","Data":"c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f"}
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.908015 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c189762cc570e10f89921aa65a2623f4b1a5e93ead0db68057c449af5c01f49f"
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") "
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") "
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983597 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") "
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.983652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") pod \"422684e4-6de9-44af-9684-9cc724395af6\" (UID: \"422684e4-6de9-44af-9684-9cc724395af6\") "
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.988929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8" (OuterVolumeSpecName: "kube-api-access-lx4n8") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "kube-api-access-lx4n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:25:24 crc kubenswrapper[4820]: I0221 08:25:24.990017 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.009576 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7"
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.013507 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.052626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data" (OuterVolumeSpecName: "config-data") pod "422684e4-6de9-44af-9684-9cc724395af6" (UID: "422684e4-6de9-44af-9684-9cc724395af6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") "
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085703 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") "
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") "
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.085847 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6rhk\" (UniqueName: \"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") pod \"64841624-ecc9-4a68-b2f8-294f328c7ce3\" (UID: \"64841624-ecc9-4a68-b2f8-294f328c7ce3\") "
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086376 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086400 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086414 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx4n8\" (UniqueName: \"kubernetes.io/projected/422684e4-6de9-44af-9684-9cc724395af6-kube-api-access-lx4n8\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.086437 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422684e4-6de9-44af-9684-9cc724395af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.089457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk" (OuterVolumeSpecName: "kube-api-access-v6rhk") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "kube-api-access-v6rhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.090331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.112122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.134967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data" (OuterVolumeSpecName: "config-data") pod "64841624-ecc9-4a68-b2f8-294f328c7ce3" (UID: "64841624-ecc9-4a68-b2f8-294f328c7ce3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187932 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187968 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6rhk\" (UniqueName: \"kubernetes.io/projected/64841624-ecc9-4a68-b2f8-294f328c7ce3-kube-api-access-v6rhk\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187977 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.187986 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64841624-ecc9-4a68-b2f8-294f328c7ce3-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.236102 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"]
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.244597 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7c5647759f-rwklt"]
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.710206 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422684e4-6de9-44af-9684-9cc724395af6" path="/var/lib/kubelet/pods/422684e4-6de9-44af-9684-9cc724395af6/volumes"
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.917083 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79f7554c5d-dxxm7"
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.944290 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"]
Feb 21 08:25:25 crc kubenswrapper[4820]: I0221 08:25:25.958197 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-79f7554c5d-dxxm7"]
Feb 21 08:25:26 crc kubenswrapper[4820]: I0221 08:25:26.940393 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7fbc8dc6-rvrvw"
Feb 21 08:25:27 crc kubenswrapper[4820]: I0221 08:25:27.011939 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"]
Feb 21 08:25:27 crc kubenswrapper[4820]: I0221 08:25:27.012505 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" containerID="cri-o://44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" gracePeriod=60
Feb 21 08:25:27 crc kubenswrapper[4820]: I0221 08:25:27.781816 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" path="/var/lib/kubelet/pods/64841624-ecc9-4a68-b2f8-294f328c7ce3/volumes"
Feb 21 08:25:28 crc kubenswrapper[4820]: I0221 08:25:28.979333 4820 scope.go:117] "RemoveContainer" containerID="8384371cb1cb59ce68f65650414ed9165b7cc3f363b2fda166fcb245381ffb64"
Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.072037 4820 scope.go:117] "RemoveContainer" containerID="9c8352c44b67eda0f166f0687429790e5bd49b1d98c898e2089a6c9be067a4f4"
Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.129062 4820 scope.go:117] "RemoveContainer" containerID="150cef9ed56fe3eb3dae1713514ca1727eaea3bed5edf04307bc072317b7eac1"
Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.444447 4820 scope.go:117] "RemoveContainer" containerID="2d67b7bb0de25794d2af04a8fdecff08fd5cb66963010072ec396cf1f0a89430"
Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.472311 4820 scope.go:117] "RemoveContainer" containerID="e88ec1f0511faea63b1b890af60d3ecbf225488e293807f27ac476bd20e4d2af"
Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.546071 4820 scope.go:117] "RemoveContainer" containerID="3e9323b3b0ecd38f4bd6801e5bdf943a91f811adc414d781d648c705fbf53dd9"
Feb 21 08:25:29 crc kubenswrapper[4820]: I0221 08:25:29.719211 4820 scope.go:117] "RemoveContainer" containerID="fd2dfabc6a845c58169feb78a970683856b5e0b8c05305224b62a62196765d9f"
Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.783847 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.785578 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.787370 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 21 08:25:29 crc kubenswrapper[4820]: E0221 08:25:29.787404 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: ,
stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:25:31 crc kubenswrapper[4820]: I0221 08:25:31.557787 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.103:8443: connect: connection refused" Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.783755 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.786142 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.787492 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:39 crc kubenswrapper[4820]: E0221 08:25:39.787562 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:25:41 crc kubenswrapper[4820]: I0221 08:25:41.558302 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-547899c658-2788v" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.103:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.103:8443: connect: connection refused" Feb 21 08:25:41 crc kubenswrapper[4820]: I0221 08:25:41.558749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-547899c658-2788v" Feb 21 08:25:43 crc kubenswrapper[4820]: I0221 08:25:43.755835 4820 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod626f6f5d-6222-406e-a687-92b74b1c9def"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod626f6f5d-6222-406e-a687-92b74b1c9def] : Timed out while waiting for systemd to remove kubepods-besteffort-pod626f6f5d_6222_406e_a687_92b74b1c9def.slice" Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.785525 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.787822 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.791102 4820 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:49 crc kubenswrapper[4820]: E0221 08:25:49.791174 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.141405 4820 generic.go:334] "Generic (PLEG): container finished" podID="81b52673-da5b-421f-be4c-d5608c8d82df" containerID="31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a" exitCode=137 Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.141485 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerDied","Data":"31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a"} Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.257092 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338691 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338745 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338885 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.338940 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.339084 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") pod \"81b52673-da5b-421f-be4c-d5608c8d82df\" (UID: \"81b52673-da5b-421f-be4c-d5608c8d82df\") " Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.340206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs" (OuterVolumeSpecName: "logs") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.344935 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.344968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz" (OuterVolumeSpecName: "kube-api-access-97qhz") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "kube-api-access-97qhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.364431 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts" (OuterVolumeSpecName: "scripts") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.367938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data" (OuterVolumeSpecName: "config-data") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.376634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.390504 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "81b52673-da5b-421f-be4c-d5608c8d82df" (UID: "81b52673-da5b-421f-be4c-d5608c8d82df"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441150 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441180 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81b52673-da5b-421f-be4c-d5608c8d82df-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441190 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441201 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b52673-da5b-421f-be4c-d5608c8d82df-logs\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441219 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441230 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qhz\" (UniqueName: \"kubernetes.io/projected/81b52673-da5b-421f-be4c-d5608c8d82df-kube-api-access-97qhz\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:51 crc kubenswrapper[4820]: I0221 08:25:51.441242 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81b52673-da5b-421f-be4c-d5608c8d82df-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.154607 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547899c658-2788v" event={"ID":"81b52673-da5b-421f-be4c-d5608c8d82df","Type":"ContainerDied","Data":"eff2d04aa677852d296ff8fc2a98555932014b77b70e9d62fecd2afd6b553dbd"} Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.154680 4820 scope.go:117] "RemoveContainer" containerID="75dd932712359b9c384bfe3ca353a892eb8c5cc411b34053a1addd1db3cfb25c" Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.154745 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547899c658-2788v" Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.183868 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.192472 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-547899c658-2788v"] Feb 21 08:25:52 crc kubenswrapper[4820]: I0221 08:25:52.352813 4820 scope.go:117] "RemoveContainer" containerID="31f0aa87caeedf0d07754a1c9bacdd7401160e05e285d884534c83101f67a23a" Feb 21 08:25:53 crc kubenswrapper[4820]: I0221 08:25:53.707522 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" path="/var/lib/kubelet/pods/81b52673-da5b-421f-be4c-d5608c8d82df/volumes" Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.784305 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.787020 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.788480 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:25:59 crc kubenswrapper[4820]: E0221 08:25:59.788524 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.055069 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.070016 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.072490 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w9mkt"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.078783 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-30a5-account-create-update-vlqzg"] Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.710871 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526239a3-9756-4dd4-9e38-6474bd1b2709" path="/var/lib/kubelet/pods/526239a3-9756-4dd4-9e38-6474bd1b2709/volumes" Feb 21 08:26:03 crc kubenswrapper[4820]: I0221 08:26:03.711532 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19f4a26-20d3-44b1-a159-3fd72a92e68f" 
path="/var/lib/kubelet/pods/b19f4a26-20d3-44b1-a159-3fd72a92e68f/volumes" Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.784445 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.786367 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.787646 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:09 crc kubenswrapper[4820]: E0221 08:26:09.787722 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:13 crc kubenswrapper[4820]: I0221 08:26:13.816214 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:26:13 crc 
kubenswrapper[4820]: I0221 08:26:13.816862 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.280621 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301389 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301455 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.301524 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") pod \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\" (UID: \"6eba12fb-3cb7-4830-9722-9c9e6ab46002\") " Feb 21 
08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.319783 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.319872 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx" (OuterVolumeSpecName: "kube-api-access-llnrx") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "kube-api-access-llnrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.342832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365222 4820 generic.go:334] "Generic (PLEG): container finished" podID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" exitCode=137 Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerDied","Data":"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1"} Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78db975b86-shg8k" event={"ID":"6eba12fb-3cb7-4830-9722-9c9e6ab46002","Type":"ContainerDied","Data":"3a9dfdc3820a237d6264d844940fa8cdbf49d936716d3113044384c4c3c6c2ab"} Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365451 4820 scope.go:117] "RemoveContainer" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.365592 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78db975b86-shg8k" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.368973 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data" (OuterVolumeSpecName: "config-data") pod "6eba12fb-3cb7-4830-9722-9c9e6ab46002" (UID: "6eba12fb-3cb7-4830-9722-9c9e6ab46002"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404182 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llnrx\" (UniqueName: \"kubernetes.io/projected/6eba12fb-3cb7-4830-9722-9c9e6ab46002-kube-api-access-llnrx\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404216 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404224 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.404248 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eba12fb-3cb7-4830-9722-9c9e6ab46002-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.422488 4820 scope.go:117] "RemoveContainer" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" Feb 21 08:26:14 crc kubenswrapper[4820]: E0221 08:26:14.423506 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1\": container with ID starting with 9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1 not found: ID does not exist" containerID="9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.423551 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1"} err="failed to get container status \"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1\": rpc error: code = NotFound desc = could not find container \"9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1\": container with ID starting with 9398bdddc1cad5a6adf7503dfbb61ed1e6061eac2468f4a8e2e6f8a6dc919db1 not found: ID does not exist" Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.702800 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:26:14 crc kubenswrapper[4820]: I0221 08:26:14.711317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78db975b86-shg8k"] Feb 21 08:26:15 crc kubenswrapper[4820]: I0221 08:26:15.706998 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" path="/var/lib/kubelet/pods/6eba12fb-3cb7-4830-9722-9c9e6ab46002/volumes" Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.784373 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.786457 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.787602 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 21 08:26:19 crc kubenswrapper[4820]: E0221 08:26:19.787650 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-67969b55f7-j9b9h" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:27 crc kubenswrapper[4820]: I0221 08:26:27.482790 4820 generic.go:334] "Generic (PLEG): container finished" podID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" exitCode=137 Feb 21 08:26:27 crc kubenswrapper[4820]: I0221 08:26:27.483075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerDied","Data":"44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5"} Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.029071 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.214813 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.215213 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.215370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.215935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") pod \"5d104918-3b6f-4543-9ca3-0ae595be78a2\" (UID: \"5d104918-3b6f-4543-9ca3-0ae595be78a2\") " Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.222619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb" (OuterVolumeSpecName: "kube-api-access-s9gdb") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "kube-api-access-s9gdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.227063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.259125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.279344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data" (OuterVolumeSpecName: "config-data") pod "5d104918-3b6f-4543-9ca3-0ae595be78a2" (UID: "5d104918-3b6f-4543-9ca3-0ae595be78a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.318985 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9gdb\" (UniqueName: \"kubernetes.io/projected/5d104918-3b6f-4543-9ca3-0ae595be78a2-kube-api-access-s9gdb\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.319020 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.319032 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.319041 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d104918-3b6f-4543-9ca3-0ae595be78a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.493967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67969b55f7-j9b9h" event={"ID":"5d104918-3b6f-4543-9ca3-0ae595be78a2","Type":"ContainerDied","Data":"dbf92b41a5a660ca8950c0f9f25b1cc977262a38e0d285fc433fc069c9b1436e"} Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.494022 4820 scope.go:117] "RemoveContainer" containerID="44ce42be9e7ead164ec3b1693d3ec8781cc14d49223a939af7317534232108d5" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.494057 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-67969b55f7-j9b9h" Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.540101 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:26:28 crc kubenswrapper[4820]: I0221 08:26:28.552868 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-67969b55f7-j9b9h"] Feb 21 08:26:29 crc kubenswrapper[4820]: I0221 08:26:29.707681 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" path="/var/lib/kubelet/pods/5d104918-3b6f-4543-9ca3-0ae595be78a2/volumes" Feb 21 08:26:30 crc kubenswrapper[4820]: I0221 08:26:30.046607 4820 scope.go:117] "RemoveContainer" containerID="9cc15cd98cb2ee5a66c67dae3b7781ebaef37c8edf2a66d0058beed46e459cfa" Feb 21 08:26:30 crc kubenswrapper[4820]: I0221 08:26:30.070504 4820 scope.go:117] "RemoveContainer" containerID="2ecad43a902d533086cc0d59299eabdf5fed0eb7581600161e0b6b859242cab9" Feb 21 08:26:43 crc kubenswrapper[4820]: I0221 08:26:43.816788 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:26:43 crc kubenswrapper[4820]: I0221 08:26:43.817316 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:26:47 crc kubenswrapper[4820]: I0221 08:26:47.049059 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:26:47 crc kubenswrapper[4820]: I0221 08:26:47.058921 4820 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6fhr4"] Feb 21 08:26:47 crc kubenswrapper[4820]: I0221 08:26:47.707868 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918975eb-d5b2-4b0e-9b35-36e92f03527b" path="/var/lib/kubelet/pods/918975eb-d5b2-4b0e-9b35-36e92f03527b/volumes" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.693832 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm"] Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694712 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694724 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694737 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694742 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694751 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694757 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694773 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694779 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694788 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694793 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694803 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694808 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694815 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694821 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.694834 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.694839 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695020 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d104918-3b6f-4543-9ca3-0ae595be78a2" containerName="heat-engine" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695036 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" 
containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695044 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695053 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695065 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eba12fb-3cb7-4830-9722-9c9e6ab46002" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695076 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b52673-da5b-421f-be4c-d5608c8d82df" containerName="horizon-log" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695087 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="64841624-ecc9-4a68-b2f8-294f328c7ce3" containerName="heat-api" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695097 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="626f6f5d-6222-406e-a687-92b74b1c9def" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: E0221 08:26:54.695305 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.695467 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="422684e4-6de9-44af-9684-9cc724395af6" containerName="heat-cfnapi" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.696615 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.698565 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.714615 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm"] Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.849431 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.849640 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.849731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: 
I0221 08:26:54.951158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.951261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.951308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.951672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.954621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:54 crc kubenswrapper[4820]: I0221 08:26:54.979257 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.017209 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.489331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm"] Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.732889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerStarted","Data":"85424f5cdfcfb76e8fff54bef331768051ba60377ac6180166407b6433b8ab48"} Feb 21 08:26:55 crc kubenswrapper[4820]: I0221 08:26:55.732944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerStarted","Data":"56b26cad348917df612c29e568c13f49ca3e99f4177ad8528f53287a99fbb6a7"} Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.052672 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.055277 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.065466 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.173391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.173474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.173587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"redhat-operators-6lcfz\" (UID: 
\"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275625 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.275893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.276126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.298199 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"redhat-operators-6lcfz\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " 
pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.375699 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.743155 4820 generic.go:334] "Generic (PLEG): container finished" podID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerID="85424f5cdfcfb76e8fff54bef331768051ba60377ac6180166407b6433b8ab48" exitCode=0 Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.743196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"85424f5cdfcfb76e8fff54bef331768051ba60377ac6180166407b6433b8ab48"} Feb 21 08:26:56 crc kubenswrapper[4820]: I0221 08:26:56.870499 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:26:56 crc kubenswrapper[4820]: W0221 08:26:56.873489 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf332fd92_9ed1_4d69_95ec_fcfc12cbd311.slice/crio-f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57 WatchSource:0}: Error finding container f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57: Status 404 returned error can't find the container with id f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57 Feb 21 08:26:57 crc kubenswrapper[4820]: I0221 08:26:57.755444 4820 generic.go:334] "Generic (PLEG): container finished" podID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" exitCode=0 Feb 21 08:26:57 crc kubenswrapper[4820]: I0221 08:26:57.755570 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" 
event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0"} Feb 21 08:26:57 crc kubenswrapper[4820]: I0221 08:26:57.755899 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerStarted","Data":"f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57"} Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.775326 4820 generic.go:334] "Generic (PLEG): container finished" podID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" exitCode=0 Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.775414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389"} Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.777856 4820 generic.go:334] "Generic (PLEG): container finished" podID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerID="2e9e7218200e546cc33414ed26da51e8bc2c5aab353917218a694e404cd445fa" exitCode=0 Feb 21 08:26:59 crc kubenswrapper[4820]: I0221 08:26:59.777895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"2e9e7218200e546cc33414ed26da51e8bc2c5aab353917218a694e404cd445fa"} Feb 21 08:27:00 crc kubenswrapper[4820]: I0221 08:27:00.788537 4820 generic.go:334] "Generic (PLEG): container finished" podID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerID="47e76bf7469e806b7b5d94074b78221815a4943a23c3a994ffe77f1f76615bf1" exitCode=0 Feb 21 08:27:00 crc kubenswrapper[4820]: I0221 08:27:00.788610 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"47e76bf7469e806b7b5d94074b78221815a4943a23c3a994ffe77f1f76615bf1"} Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.159247 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.192657 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") pod \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.192751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") pod \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.192795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") pod \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\" (UID: \"f69dedde-7358-4e63-b7b3-cc4ff8c1258e\") " Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.199421 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle" (OuterVolumeSpecName: "bundle") pod "f69dedde-7358-4e63-b7b3-cc4ff8c1258e" (UID: "f69dedde-7358-4e63-b7b3-cc4ff8c1258e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.203475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util" (OuterVolumeSpecName: "util") pod "f69dedde-7358-4e63-b7b3-cc4ff8c1258e" (UID: "f69dedde-7358-4e63-b7b3-cc4ff8c1258e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.208730 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt" (OuterVolumeSpecName: "kube-api-access-mn9tt") pod "f69dedde-7358-4e63-b7b3-cc4ff8c1258e" (UID: "f69dedde-7358-4e63-b7b3-cc4ff8c1258e"). InnerVolumeSpecName "kube-api-access-mn9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.295344 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn9tt\" (UniqueName: \"kubernetes.io/projected/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-kube-api-access-mn9tt\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.295388 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.295399 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f69dedde-7358-4e63-b7b3-cc4ff8c1258e-util\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.811194 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" 
event={"ID":"f69dedde-7358-4e63-b7b3-cc4ff8c1258e","Type":"ContainerDied","Data":"56b26cad348917df612c29e568c13f49ca3e99f4177ad8528f53287a99fbb6a7"} Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.811589 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b26cad348917df612c29e568c13f49ca3e99f4177ad8528f53287a99fbb6a7" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.811314 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm" Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.814002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerStarted","Data":"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05"} Feb 21 08:27:02 crc kubenswrapper[4820]: I0221 08:27:02.847465 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lcfz" podStartSLOduration=3.451932966 podStartE2EDuration="6.847444697s" podCreationTimestamp="2026-02-21 08:26:56 +0000 UTC" firstStartedPulling="2026-02-21 08:26:57.75750161 +0000 UTC m=+5992.790585808" lastFinishedPulling="2026-02-21 08:27:01.153013341 +0000 UTC m=+5996.186097539" observedRunningTime="2026-02-21 08:27:02.841916937 +0000 UTC m=+5997.875001205" watchObservedRunningTime="2026-02-21 08:27:02.847444697 +0000 UTC m=+5997.880528895" Feb 21 08:27:06 crc kubenswrapper[4820]: I0221 08:27:06.376602 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:06 crc kubenswrapper[4820]: I0221 08:27:06.376692 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:07 crc kubenswrapper[4820]: I0221 08:27:07.429369 4820 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lcfz" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" probeResult="failure" output=< Feb 21 08:27:07 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:27:07 crc kubenswrapper[4820]: > Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.288859 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9"] Feb 21 08:27:13 crc kubenswrapper[4820]: E0221 08:27:13.289749 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="util" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289766 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="util" Feb 21 08:27:13 crc kubenswrapper[4820]: E0221 08:27:13.289783 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="extract" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289792 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="extract" Feb 21 08:27:13 crc kubenswrapper[4820]: E0221 08:27:13.289801 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="pull" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289808 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="pull" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.289978 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69dedde-7358-4e63-b7b3-cc4ff8c1258e" containerName="extract" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.290624 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.293617 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-l79dr" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.294347 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.299450 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.371200 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.372351 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxfs\" (UniqueName: \"kubernetes.io/projected/b371e087-d814-4a0f-9ff3-d55d20e24544-kube-api-access-5qxfs\") pod \"obo-prometheus-operator-68bc856cb9-lw5b9\" (UID: \"b371e087-d814-4a0f-9ff3-d55d20e24544\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.475400 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.476695 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.477189 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qxfs\" (UniqueName: \"kubernetes.io/projected/b371e087-d814-4a0f-9ff3-d55d20e24544-kube-api-access-5qxfs\") pod \"obo-prometheus-operator-68bc856cb9-lw5b9\" (UID: \"b371e087-d814-4a0f-9ff3-d55d20e24544\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.481636 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.489259 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kv57d" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.507361 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.508780 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.528821 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.535108 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qxfs\" (UniqueName: \"kubernetes.io/projected/b371e087-d814-4a0f-9ff3-d55d20e24544-kube-api-access-5qxfs\") pod \"obo-prometheus-operator-68bc856cb9-lw5b9\" (UID: \"b371e087-d814-4a0f-9ff3-d55d20e24544\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.544692 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.579581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.579663 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.608110 4820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686403 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.686838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 
08:27:13.742272 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.767799 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t74mh"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.770336 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.774774 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-t2p9k" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.775145 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.784420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c94be0a-30e4-454d-a744-be2161cdbed2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv\" (UID: \"6c94be0a-30e4-454d-a744-be2161cdbed2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.800730 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.800922 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.803089 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.826346 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840173 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a57c79-5f59-4436-802e-2be346a7f24b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-89dc89b99-p7twl\" (UID: \"33a57c79-5f59-4436-802e-2be346a7f24b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840422 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840505 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.840563 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.841774 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.841855 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" gracePeriod=600 Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.867354 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t74mh"] Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.890479 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.904846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab6a090-8dce-4a3c-aa4a-467c37f77510-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:13 crc kubenswrapper[4820]: I0221 08:27:13.904947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqc8q\" (UniqueName: \"kubernetes.io/projected/dab6a090-8dce-4a3c-aa4a-467c37f77510-kube-api-access-kqc8q\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.015594 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab6a090-8dce-4a3c-aa4a-467c37f77510-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.015680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqc8q\" (UniqueName: \"kubernetes.io/projected/dab6a090-8dce-4a3c-aa4a-467c37f77510-kube-api-access-kqc8q\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.035118 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab6a090-8dce-4a3c-aa4a-467c37f77510-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.036899 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m42j5"] Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.038574 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.044602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqc8q\" (UniqueName: \"kubernetes.io/projected/dab6a090-8dce-4a3c-aa4a-467c37f77510-kube-api-access-kqc8q\") pod \"observability-operator-59bdc8b94-t74mh\" (UID: \"dab6a090-8dce-4a3c-aa4a-467c37f77510\") " pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.045854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6jwr4" Feb 21 08:27:14 crc kubenswrapper[4820]: E0221 08:27:14.064077 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.088255 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m42j5"] Feb 21 
08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.127426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b286f\" (UniqueName: \"kubernetes.io/projected/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-kube-api-access-b286f\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.127498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.230263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b286f\" (UniqueName: \"kubernetes.io/projected/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-kube-api-access-b286f\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.230334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.231602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-openshift-service-ca\") pod 
\"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.239738 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.254017 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b286f\" (UniqueName: \"kubernetes.io/projected/33c8ba11-479e-4bbc-87c4-0d6da77be2eb-kube-api-access-b286f\") pod \"perses-operator-5bf474d74f-m42j5\" (UID: \"33c8ba11-479e-4bbc-87c4-0d6da77be2eb\") " pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.417872 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.538542 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9"] Feb 21 08:27:14 crc kubenswrapper[4820]: I0221 08:27:14.840829 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t74mh"] Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.029733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" event={"ID":"b371e087-d814-4a0f-9ff3-d55d20e24544","Type":"ContainerStarted","Data":"7015609bff7430f06327a8ed8bf9116a2e41fa5800d00a2e59f28260896a0992"} Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.031204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" 
event={"ID":"dab6a090-8dce-4a3c-aa4a-467c37f77510","Type":"ContainerStarted","Data":"08802462c79e54eee01dc2c372789702e44754a9a562f9db57071148537845a0"} Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.038509 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" exitCode=0 Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.038563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"} Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.038601 4820 scope.go:117] "RemoveContainer" containerID="71790bda67cb32788b4b805eefed34727cfa5df22b69da5f6508ea1c43987bd6" Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.039390 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:15 crc kubenswrapper[4820]: E0221 08:27:15.039682 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.062721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv"] Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.156736 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl"] 
Feb 21 08:27:15 crc kubenswrapper[4820]: W0221 08:27:15.195225 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33c8ba11_479e_4bbc_87c4_0d6da77be2eb.slice/crio-8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920 WatchSource:0}: Error finding container 8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920: Status 404 returned error can't find the container with id 8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920 Feb 21 08:27:15 crc kubenswrapper[4820]: I0221 08:27:15.197643 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m42j5"] Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.084635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" event={"ID":"6c94be0a-30e4-454d-a744-be2161cdbed2","Type":"ContainerStarted","Data":"367a001a345c6bc9fa61f82a2a495381a595891df22324fa588e7d18241d8d65"} Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.095701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" event={"ID":"33a57c79-5f59-4436-802e-2be346a7f24b","Type":"ContainerStarted","Data":"4983dc7ff215f46fafdaf55c8453068d55e0623b53c7a09c5b3cb62da5ac5ebe"} Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.098359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" event={"ID":"33c8ba11-479e-4bbc-87c4-0d6da77be2eb","Type":"ContainerStarted","Data":"8b562735e9fbd3f0a9d2b4f587a2404e872eed4c9394d96139edb51578ab3920"} Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.477574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 
08:27:16.599041 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:16 crc kubenswrapper[4820]: I0221 08:27:16.811532 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:27:18 crc kubenswrapper[4820]: I0221 08:27:18.134078 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lcfz" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" containerID="cri-o://5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" gracePeriod=2 Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.100520 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.173261 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") pod \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.173356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") pod \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.173535 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") pod \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\" (UID: \"f332fd92-9ed1-4d69-95ec-fcfc12cbd311\") " Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 
08:27:19.174471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities" (OuterVolumeSpecName: "utilities") pod "f332fd92-9ed1-4d69-95ec-fcfc12cbd311" (UID: "f332fd92-9ed1-4d69-95ec-fcfc12cbd311"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196784 4820 generic.go:334] "Generic (PLEG): container finished" podID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" exitCode=0 Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05"} Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lcfz" event={"ID":"f332fd92-9ed1-4d69-95ec-fcfc12cbd311","Type":"ContainerDied","Data":"f093c6f9044badc9c1e1fb7dc3bec6af3e91999aee22d48558181b058fe99b57"} Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196935 4820 scope.go:117] "RemoveContainer" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.196969 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lcfz" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.199417 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j" (OuterVolumeSpecName: "kube-api-access-6495j") pod "f332fd92-9ed1-4d69-95ec-fcfc12cbd311" (UID: "f332fd92-9ed1-4d69-95ec-fcfc12cbd311"). InnerVolumeSpecName "kube-api-access-6495j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.264908 4820 scope.go:117] "RemoveContainer" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.277608 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.277661 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6495j\" (UniqueName: \"kubernetes.io/projected/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-kube-api-access-6495j\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.351618 4820 scope.go:117] "RemoveContainer" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.370290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f332fd92-9ed1-4d69-95ec-fcfc12cbd311" (UID: "f332fd92-9ed1-4d69-95ec-fcfc12cbd311"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.381445 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f332fd92-9ed1-4d69-95ec-fcfc12cbd311-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.412845 4820 scope.go:117] "RemoveContainer" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" Feb 21 08:27:19 crc kubenswrapper[4820]: E0221 08:27:19.416603 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05\": container with ID starting with 5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05 not found: ID does not exist" containerID="5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.416699 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05"} err="failed to get container status \"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05\": rpc error: code = NotFound desc = could not find container \"5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05\": container with ID starting with 5e7040f23395bde7ca7dcd93f0ae0ffb233670050c3f3675cd9fd0e63d29de05 not found: ID does not exist" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.416724 4820 scope.go:117] "RemoveContainer" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" Feb 21 08:27:19 crc kubenswrapper[4820]: E0221 08:27:19.417283 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389\": container with ID starting with 55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389 not found: ID does not exist" containerID="55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.417331 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389"} err="failed to get container status \"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389\": rpc error: code = NotFound desc = could not find container \"55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389\": container with ID starting with 55ffa18e0c78c5780bb13cb5ed2e1c6a99bf3108c66417b31fec81d77ecd9389 not found: ID does not exist" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.417369 4820 scope.go:117] "RemoveContainer" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" Feb 21 08:27:19 crc kubenswrapper[4820]: E0221 08:27:19.417678 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0\": container with ID starting with fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0 not found: ID does not exist" containerID="fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.417698 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0"} err="failed to get container status \"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0\": rpc error: code = NotFound desc = could not find container \"fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0\": container with ID 
starting with fa58458e126f09968c9ce82a3bbc9df88f2fcea6840a222fcef0016c22e0a7e0 not found: ID does not exist" Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.548542 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.564572 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lcfz"] Feb 21 08:27:19 crc kubenswrapper[4820]: I0221 08:27:19.721473 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" path="/var/lib/kubelet/pods/f332fd92-9ed1-4d69-95ec-fcfc12cbd311/volumes" Feb 21 08:27:28 crc kubenswrapper[4820]: I0221 08:27:28.696954 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:28 crc kubenswrapper[4820]: E0221 08:27:28.697976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:30 crc kubenswrapper[4820]: I0221 08:27:30.230161 4820 scope.go:117] "RemoveContainer" containerID="8b6311f31356ce76831ef1e643a71519f1d4135a662667153af1b1ec2bf2c1c0" Feb 21 08:27:32 crc kubenswrapper[4820]: I0221 08:27:32.513780 4820 scope.go:117] "RemoveContainer" containerID="ffd6e0717429942441d6739f7446e83992338ace5a92acdad1687015e926114e" Feb 21 08:27:32 crc kubenswrapper[4820]: I0221 08:27:32.559616 4820 scope.go:117] "RemoveContainer" containerID="396aa495a2b94c68ded63dc96a4fdc14015bda68ab667126a1a74b0cac6ba50e" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.341085 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" event={"ID":"33a57c79-5f59-4436-802e-2be346a7f24b","Type":"ContainerStarted","Data":"86a283e87458f5d1a54d6236bb48aabebaff5f0ac07afa684994d780e05d8e05"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.343333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" event={"ID":"33c8ba11-479e-4bbc-87c4-0d6da77be2eb","Type":"ContainerStarted","Data":"468f81479571c143114337829a200469a5c2b406004527517a69a2cb318af9b9"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.343451 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.346230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" event={"ID":"dab6a090-8dce-4a3c-aa4a-467c37f77510","Type":"ContainerStarted","Data":"bdfa84ae7fda1bab5214e775f544d97c62b594e23ab2cae7af41dd027a20dfcd"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.346456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.349504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" event={"ID":"6c94be0a-30e4-454d-a744-be2161cdbed2","Type":"ContainerStarted","Data":"3768f8907766bbe5324d9699bf6291d24f5cd455f7c4fd81d1090166fc18cc9a"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.351981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" 
event={"ID":"b371e087-d814-4a0f-9ff3-d55d20e24544","Type":"ContainerStarted","Data":"8cf336a0ca61eca31ad876313caba0a4ba4afd24a27dd6443be2d2bc47b2fb6e"} Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.364982 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-p7twl" podStartSLOduration=2.992854642 podStartE2EDuration="20.364965988s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:15.164693366 +0000 UTC m=+6010.197777564" lastFinishedPulling="2026-02-21 08:27:32.536804712 +0000 UTC m=+6027.569888910" observedRunningTime="2026-02-21 08:27:33.361482625 +0000 UTC m=+6028.394566823" watchObservedRunningTime="2026-02-21 08:27:33.364965988 +0000 UTC m=+6028.398050176" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.386953 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lw5b9" podStartSLOduration=2.432265477 podStartE2EDuration="20.38693474s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:14.584168953 +0000 UTC m=+6009.617253151" lastFinishedPulling="2026-02-21 08:27:32.538838206 +0000 UTC m=+6027.571922414" observedRunningTime="2026-02-21 08:27:33.378661347 +0000 UTC m=+6028.411745545" watchObservedRunningTime="2026-02-21 08:27:33.38693474 +0000 UTC m=+6028.420018928" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.390933 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.413034 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" podStartSLOduration=3.095223512 podStartE2EDuration="20.413010603s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" 
firstStartedPulling="2026-02-21 08:27:15.198717744 +0000 UTC m=+6010.231801942" lastFinishedPulling="2026-02-21 08:27:32.516504835 +0000 UTC m=+6027.549589033" observedRunningTime="2026-02-21 08:27:33.40103006 +0000 UTC m=+6028.434114258" watchObservedRunningTime="2026-02-21 08:27:33.413010603 +0000 UTC m=+6028.446094811" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.444711 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv" podStartSLOduration=3.031319778 podStartE2EDuration="20.444691816s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:15.101115613 +0000 UTC m=+6010.134199811" lastFinishedPulling="2026-02-21 08:27:32.514487651 +0000 UTC m=+6027.547571849" observedRunningTime="2026-02-21 08:27:33.443137665 +0000 UTC m=+6028.476221883" watchObservedRunningTime="2026-02-21 08:27:33.444691816 +0000 UTC m=+6028.477776014" Feb 21 08:27:33 crc kubenswrapper[4820]: I0221 08:27:33.478655 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-t74mh" podStartSLOduration=2.767550022 podStartE2EDuration="20.478634841s" podCreationTimestamp="2026-02-21 08:27:13 +0000 UTC" firstStartedPulling="2026-02-21 08:27:14.848584709 +0000 UTC m=+6009.881668907" lastFinishedPulling="2026-02-21 08:27:32.559669528 +0000 UTC m=+6027.592753726" observedRunningTime="2026-02-21 08:27:33.475079746 +0000 UTC m=+6028.508163954" watchObservedRunningTime="2026-02-21 08:27:33.478634841 +0000 UTC m=+6028.511719039" Feb 21 08:27:39 crc kubenswrapper[4820]: I0221 08:27:39.697363 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:39 crc kubenswrapper[4820]: E0221 08:27:39.698204 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:27:44 crc kubenswrapper[4820]: I0221 08:27:44.420855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-m42j5" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.143824 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.144342 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" containerID="cri-o://d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469" gracePeriod=2 Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.159478 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.216309 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218343 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218374 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218388 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-content" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218396 4820 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-content" Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218421 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218429 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" Feb 21 08:27:48 crc kubenswrapper[4820]: E0221 08:27:48.218470 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-utilities" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218480 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="extract-utilities" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218685 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerName="openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.218713 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f332fd92-9ed1-4d69-95ec-fcfc12cbd311" containerName="registry-server" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.219464 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.243716 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.270668 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375436 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp8t\" (UniqueName: \"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.375649 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.415278 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.419572 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.427580 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vsmcb" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.434349 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp8t\" (UniqueName: \"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478345 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478461 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"kube-state-metrics-0\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.478493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.480200 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.489882 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.492266 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.512835 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp8t\" (UniqueName: 
\"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"openstackclient\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.555336 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.696924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"kube-state-metrics-0\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.750094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"kube-state-metrics-0\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " pod="openstack/kube-state-metrics-0" Feb 21 08:27:48 crc kubenswrapper[4820]: I0221 08:27:48.761564 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.189374 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.191923 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.196492 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.196760 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.196916 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.197177 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-csq7d" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.197410 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.229330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325379 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dp5q\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-kube-api-access-7dp5q\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.325733 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436526 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436755 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dp5q\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-kube-api-access-7dp5q\") pod \"alertmanager-metric-storage-0\" (UID: 
\"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.436816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.437125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.442725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.443053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.446379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0042658c-e832-4073-894f-78a25bcdb5f9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.446591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0042658c-e832-4073-894f-78a25bcdb5f9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.449071 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.476148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.493230 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dp5q\" (UniqueName: \"kubernetes.io/projected/0042658c-e832-4073-894f-78a25bcdb5f9-kube-api-access-7dp5q\") pod \"alertmanager-metric-storage-0\" (UID: \"0042658c-e832-4073-894f-78a25bcdb5f9\") " pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.538428 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609","Type":"ContainerStarted","Data":"6c917a75194f55f50fe2b10282b30fc6a5cdf4e8c4a51c57a5e5f0dd1fc7b3a1"} Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.553445 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.765974 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.780316 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.783482 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.785879 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.786116 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.787806 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.787990 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.788120 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.788281 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 
08:27:49.788374 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.788479 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zxfvn" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.789831 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962020 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962619 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962765 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962899 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962949 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.962991 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.963140 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:49 crc kubenswrapper[4820]: I0221 08:27:49.963266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " 
pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065314 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065387 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " 
pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065502 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.065534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.066711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.066982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.067042 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.071876 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.071895 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.072958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.074089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.074516 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.076506 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.076546 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cd8dd8437aa8075cd51ba65607a645fc10f7b325eb32cc6b53f399eac5c08fb8/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.084997 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.127306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.197403 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.231819 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.551591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"7ae037941122768c0878a7ea9d096cb1af11f4eef12dd5a6757839730b130eed"} Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.553842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609","Type":"ContainerStarted","Data":"7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265"} Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.555405 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerStarted","Data":"6f3a59fdd346b4bf2cd6317827d2bf8f9f715934794135b9913b0326998f7186"} Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.559879 4820 generic.go:334] "Generic (PLEG): container finished" podID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" containerID="d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469" exitCode=137 Feb 21 08:27:50 crc kubenswrapper[4820]: I0221 08:27:50.576748 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.5767278620000003 podStartE2EDuration="2.576727862s" podCreationTimestamp="2026-02-21 08:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:27:50.567936785 +0000 UTC m=+6045.601020973" watchObservedRunningTime="2026-02-21 08:27:50.576727862 +0000 UTC m=+6045.609812060" Feb 21 08:27:50 
crc kubenswrapper[4820]: I0221 08:27:50.655047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:27:51 crc kubenswrapper[4820]: I0221 08:27:51.568536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"8024b89ef022e9ea6ebea26e2dc95ed3b4eeb5984a56c0e3f69c3d705d4bb5c2"} Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.006788 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119933 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.119991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") 
pod \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\" (UID: \"0690f7f6-8a8e-4c10-92b5-31640a2a46b1\") " Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.128546 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4" (OuterVolumeSpecName: "kube-api-access-nnds4") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "kube-api-access-nnds4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.157581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.169675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.178859 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0690f7f6-8a8e-4c10-92b5-31640a2a46b1" (UID: "0690f7f6-8a8e-4c10-92b5-31640a2a46b1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222354 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222388 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222415 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.222423 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnds4\" (UniqueName: \"kubernetes.io/projected/0690f7f6-8a8e-4c10-92b5-31640a2a46b1-kube-api-access-nnds4\") on node \"crc\" DevicePath \"\"" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.585607 4820 scope.go:117] "RemoveContainer" containerID="d2a5a3b2cd722605c77544d2b55b04c162a515d379ad4f861603c967fcd87469" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.585646 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.612977 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" Feb 21 08:27:53 crc kubenswrapper[4820]: I0221 08:27:53.712794 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0690f7f6-8a8e-4c10-92b5-31640a2a46b1" path="/var/lib/kubelet/pods/0690f7f6-8a8e-4c10-92b5-31640a2a46b1/volumes" Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.595535 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerStarted","Data":"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1"} Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.596140 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.614363 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.369216561 podStartE2EDuration="6.614318973s" podCreationTimestamp="2026-02-21 08:27:48 +0000 UTC" firstStartedPulling="2026-02-21 08:27:49.80292633 +0000 UTC m=+6044.836010528" lastFinishedPulling="2026-02-21 08:27:54.048028742 +0000 UTC m=+6049.081112940" observedRunningTime="2026-02-21 08:27:54.610281104 +0000 UTC m=+6049.643365312" watchObservedRunningTime="2026-02-21 08:27:54.614318973 +0000 UTC m=+6049.647403181" Feb 21 08:27:54 crc kubenswrapper[4820]: I0221 08:27:54.696899 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:27:54 crc kubenswrapper[4820]: E0221 08:27:54.697172 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:00 crc kubenswrapper[4820]: I0221 08:28:00.652591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"f7fcb20c2e1cf93c058ef96cbad6dcec64dbeb08a35e6ebd9ceb4005d27678da"} Feb 21 08:28:01 crc kubenswrapper[4820]: I0221 08:28:01.664433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.697566 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:07 crc kubenswrapper[4820]: E0221 08:28:07.698891 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.725521 4820 generic.go:334] "Generic (PLEG): container finished" podID="0042658c-e832-4073-894f-78a25bcdb5f9" containerID="f7fcb20c2e1cf93c058ef96cbad6dcec64dbeb08a35e6ebd9ceb4005d27678da" exitCode=0 Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.725614 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerDied","Data":"f7fcb20c2e1cf93c058ef96cbad6dcec64dbeb08a35e6ebd9ceb4005d27678da"} Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.733655 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" exitCode=0 Feb 21 08:28:07 crc kubenswrapper[4820]: I0221 08:28:07.733706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} Feb 21 08:28:08 crc kubenswrapper[4820]: I0221 08:28:08.766456 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.134181 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.137898 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.152755 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.197276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.197392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.197493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.300379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.300939 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.301008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.301086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.301550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.348136 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"certified-operators-px47t\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.459164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.823808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.832641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"06d78d811c45dafd03edd611aefbbf2dada0ef016a3223fc95882794fd36d330"} Feb 21 08:28:14 crc kubenswrapper[4820]: I0221 08:28:14.964762 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:14 crc kubenswrapper[4820]: W0221 08:28:14.973370 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff694654_0a77_4fcd_86a3_af752c869359.slice/crio-3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2 WatchSource:0}: Error finding container 3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2: Status 404 returned error can't find the container with id 3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2 Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.843790 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff694654-0a77-4fcd-86a3-af752c869359" containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" exitCode=0 Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.843989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58"} 
Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.844411 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerStarted","Data":"3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2"} Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.930296 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.932801 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:15 crc kubenswrapper[4820]: I0221 08:28:15.942294 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.039149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.039429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.039486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod 
\"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.140727 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141254 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.141842 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod \"community-operators-ffmqr\" (UID: 
\"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.162599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"community-operators-ffmqr\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") " pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.299045 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.828599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:16 crc kubenswrapper[4820]: W0221 08:28:16.828746 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a562943_0b50_4684_bfae_b185088ff6ba.slice/crio-c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3 WatchSource:0}: Error finding container c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3: Status 404 returned error can't find the container with id c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3 Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.855122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerStarted","Data":"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457"} Feb 21 08:28:16 crc kubenswrapper[4820]: I0221 08:28:16.856720 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" 
event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerStarted","Data":"c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3"} Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.335711 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.348228 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.386209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.386548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.387132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.413994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.490335 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.490460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.490530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.491341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.492572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.512175 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"redhat-marketplace-zwx2p\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") " pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.679257 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.866610 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a562943-0b50-4684-bfae-b185088ff6ba" containerID="85adee0a70bdaf76950c48c8d16085221f5d902409bb24b4d68c953cf5c5a182" exitCode=0 Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.866783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"85adee0a70bdaf76950c48c8d16085221f5d902409bb24b4d68c953cf5c5a182"} Feb 21 08:28:17 crc kubenswrapper[4820]: I0221 08:28:17.878881 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.390837 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"] Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.891212 4820 generic.go:334] "Generic (PLEG): container finished" podID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerID="6e177d00a243df83ea338ac14dee0c063928e5ffb6918fbecb57fd977a88e413" exitCode=0 Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.891291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" 
event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"6e177d00a243df83ea338ac14dee0c063928e5ffb6918fbecb57fd977a88e413"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.891603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerStarted","Data":"8ca2ef79583171317b60a42c2026737ba15070b89b3825567b51aa143c5c24b8"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.899802 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0042658c-e832-4073-894f-78a25bcdb5f9","Type":"ContainerStarted","Data":"04f8db3d84f2c39089205e963ed81f647af5d177ba7c89996c8a2d19d0fcf26e"} Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.900095 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.907754 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 21 08:28:18 crc kubenswrapper[4820]: I0221 08:28:18.946670 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.912060485 podStartE2EDuration="29.946647164s" podCreationTimestamp="2026-02-21 08:27:49 +0000 UTC" firstStartedPulling="2026-02-21 08:27:50.241849218 +0000 UTC m=+6045.274933406" lastFinishedPulling="2026-02-21 08:28:14.276435897 +0000 UTC m=+6069.309520085" observedRunningTime="2026-02-21 08:28:18.938187766 +0000 UTC m=+6073.971271974" watchObservedRunningTime="2026-02-21 08:28:18.946647164 +0000 UTC m=+6073.979731362" Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.919572 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" 
event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerStarted","Data":"defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0"} Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.924404 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff694654-0a77-4fcd-86a3-af752c869359" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" exitCode=0 Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.924476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457"} Feb 21 08:28:19 crc kubenswrapper[4820]: I0221 08:28:19.928832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerStarted","Data":"d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.980743 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a562943-0b50-4684-bfae-b185088ff6ba" containerID="d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef" exitCode=0 Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.980835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.983621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerStarted","Data":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 
08:28:22.984481 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.986142 4820 generic.go:334] "Generic (PLEG): container finished" podID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerID="defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0" exitCode=0 Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.986209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0"} Feb 21 08:28:22 crc kubenswrapper[4820]: I0221 08:28:22.988109 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerStarted","Data":"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6"} Feb 21 08:28:23 crc kubenswrapper[4820]: I0221 08:28:23.028544 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-px47t" podStartSLOduration=3.201742345 podStartE2EDuration="9.028526949s" podCreationTimestamp="2026-02-21 08:28:14 +0000 UTC" firstStartedPulling="2026-02-21 08:28:15.845439506 +0000 UTC m=+6070.878523704" lastFinishedPulling="2026-02-21 08:28:21.67222411 +0000 UTC m=+6076.705308308" observedRunningTime="2026-02-21 08:28:23.021665294 +0000 UTC m=+6078.054749502" watchObservedRunningTime="2026-02-21 08:28:23.028526949 +0000 UTC m=+6078.061611147" Feb 21 08:28:23 crc kubenswrapper[4820]: I0221 08:28:23.089355 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.54632791 podStartE2EDuration="35.089335687s" podCreationTimestamp="2026-02-21 08:27:48 +0000 UTC" firstStartedPulling="2026-02-21 
08:27:50.666912212 +0000 UTC m=+6045.699996420" lastFinishedPulling="2026-02-21 08:28:22.209919999 +0000 UTC m=+6077.243004197" observedRunningTime="2026-02-21 08:28:23.073569622 +0000 UTC m=+6078.106653820" watchObservedRunningTime="2026-02-21 08:28:23.089335687 +0000 UTC m=+6078.122419875" Feb 21 08:28:23 crc kubenswrapper[4820]: I0221 08:28:23.696913 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:23 crc kubenswrapper[4820]: E0221 08:28:23.697400 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.003026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerStarted","Data":"3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d"} Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.005282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerStarted","Data":"95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9"} Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.040421 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ffmqr" podStartSLOduration=3.465512779 podStartE2EDuration="9.040402215s" podCreationTimestamp="2026-02-21 08:28:15 +0000 UTC" firstStartedPulling="2026-02-21 08:28:17.868492391 +0000 UTC 
m=+6072.901576589" lastFinishedPulling="2026-02-21 08:28:23.443381827 +0000 UTC m=+6078.476466025" observedRunningTime="2026-02-21 08:28:24.027575409 +0000 UTC m=+6079.060659607" watchObservedRunningTime="2026-02-21 08:28:24.040402215 +0000 UTC m=+6079.073486413" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.086677 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwx2p" podStartSLOduration=2.570815123 podStartE2EDuration="7.086652871s" podCreationTimestamp="2026-02-21 08:28:17 +0000 UTC" firstStartedPulling="2026-02-21 08:28:18.894509149 +0000 UTC m=+6073.927593347" lastFinishedPulling="2026-02-21 08:28:23.410346897 +0000 UTC m=+6078.443431095" observedRunningTime="2026-02-21 08:28:24.046212031 +0000 UTC m=+6079.079296229" watchObservedRunningTime="2026-02-21 08:28:24.086652871 +0000 UTC m=+6079.119737069" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.459473 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:24 crc kubenswrapper[4820]: I0221 08:28:24.459731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.053396 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.065757 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.079157 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mkg7q"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.088667 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9c0c-account-create-update-bf5w2"] Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 
08:28:25.198025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.504788 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:25 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:25 crc kubenswrapper[4820]: > Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.719800 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3214fb7b-d651-4bd3-a75b-a9995693fc60" path="/var/lib/kubelet/pods/3214fb7b-d651-4bd3-a75b-a9995693fc60/volumes" Feb 21 08:28:25 crc kubenswrapper[4820]: I0221 08:28:25.720645 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19c6e2c-81cf-472e-babb-fb9cf7bf052b" path="/var/lib/kubelet/pods/d19c6e2c-81cf-472e-babb-fb9cf7bf052b/volumes" Feb 21 08:28:26 crc kubenswrapper[4820]: I0221 08:28:26.300133 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:26 crc kubenswrapper[4820]: I0221 08:28:26.300467 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ffmqr" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.353363 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ffmqr" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:27 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:27 crc kubenswrapper[4820]: > Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.466395 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 
08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.473405 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.481161 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.482346 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.488680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.605865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.605910 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.605935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") 
pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.606232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.680140 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.680198 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwx2p" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc 
kubenswrapper[4820]: I0221 08:28:27.708630 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708795 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0" Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.708937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.709418 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.709509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.723318 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.729134 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.729810 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.743262 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.786177 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwx2p"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.788310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"ceilometer-0\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " pod="openstack/ceilometer-0"
Feb 21 08:28:27 crc kubenswrapper[4820]: I0221 08:28:27.804817 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 08:28:28 crc kubenswrapper[4820]: I0221 08:28:28.140484 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwx2p"
Feb 21 08:28:28 crc kubenswrapper[4820]: I0221 08:28:28.470564 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 08:28:29 crc kubenswrapper[4820]: I0221 08:28:29.066701 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"237af00766cb3ac668153a70322a571c05a31fa10748184013c7aedd5f203ded"}
Feb 21 08:28:29 crc kubenswrapper[4820]: I0221 08:28:29.323061 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"]
Feb 21 08:28:30 crc kubenswrapper[4820]: I0221 08:28:30.078475 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwx2p" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server" containerID="cri-o://95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9" gracePeriod=2
Feb 21 08:28:31 crc kubenswrapper[4820]: I0221 08:28:31.092096 4820 generic.go:334] "Generic (PLEG): container finished" podID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerID="95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9" exitCode=0
Feb 21 08:28:31 crc kubenswrapper[4820]: I0221 08:28:31.092199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9"}
Feb 21 08:28:32 crc kubenswrapper[4820]: I0221 08:28:32.777052 4820 scope.go:117] "RemoveContainer" containerID="d615c6eccf115a17b159e3c5aa929268d96702d9d0293e623715649c3ad02f08"
Feb 21 08:28:33 crc kubenswrapper[4820]: I0221 08:28:33.757651 4820 scope.go:117] "RemoveContainer" containerID="3e52a366c388477e04648e39ebed9de97e6f940db275bcc2bd5bce85d17a210e"
Feb 21 08:28:33 crc kubenswrapper[4820]: I0221 08:28:33.916916 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p"
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.066952 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") pod \"de6c7e38-55e9-4696-9f00-f7774a9e1410\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") "
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.067272 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") pod \"de6c7e38-55e9-4696-9f00-f7774a9e1410\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") "
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.067307 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") pod \"de6c7e38-55e9-4696-9f00-f7774a9e1410\" (UID: \"de6c7e38-55e9-4696-9f00-f7774a9e1410\") "
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.067996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities" (OuterVolumeSpecName: "utilities") pod "de6c7e38-55e9-4696-9f00-f7774a9e1410" (UID: "de6c7e38-55e9-4696-9f00-f7774a9e1410"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.072699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt" (OuterVolumeSpecName: "kube-api-access-bkrlt") pod "de6c7e38-55e9-4696-9f00-f7774a9e1410" (UID: "de6c7e38-55e9-4696-9f00-f7774a9e1410"). InnerVolumeSpecName "kube-api-access-bkrlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.083818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de6c7e38-55e9-4696-9f00-f7774a9e1410" (UID: "de6c7e38-55e9-4696-9f00-f7774a9e1410"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.127027 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwx2p" event={"ID":"de6c7e38-55e9-4696-9f00-f7774a9e1410","Type":"ContainerDied","Data":"8ca2ef79583171317b60a42c2026737ba15070b89b3825567b51aa143c5c24b8"}
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.127125 4820 scope.go:117] "RemoveContainer" containerID="95d16e2b913fed2d61bc0f263eb8f38889071e4b9bdbb9c548ac13518ff990f9"
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.127133 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwx2p"
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.169800 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.169841 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c7e38-55e9-4696-9f00-f7774a9e1410-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.169856 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkrlt\" (UniqueName: \"kubernetes.io/projected/de6c7e38-55e9-4696-9f00-f7774a9e1410-kube-api-access-bkrlt\") on node \"crc\" DevicePath \"\""
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.178742 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"]
Feb 21 08:28:34 crc kubenswrapper[4820]: I0221 08:28:34.190667 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwx2p"]
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.055880 4820 scope.go:117] "RemoveContainer" containerID="defbd84a660f2b863ed690d803b797edc42cfd55fb3311d91751cdbcc2ec68b0"
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.079211 4820 scope.go:117] "RemoveContainer" containerID="6e177d00a243df83ea338ac14dee0c063928e5ffb6918fbecb57fd977a88e413"
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.198025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.207509 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.509698 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" probeResult="failure" output=<
Feb 21 08:28:35 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 08:28:35 crc kubenswrapper[4820]: >
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.700477 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:28:35 crc kubenswrapper[4820]: E0221 08:28:35.700696 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:28:35 crc kubenswrapper[4820]: I0221 08:28:35.713280 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" path="/var/lib/kubelet/pods/de6c7e38-55e9-4696-9f00-f7774a9e1410/volumes"
Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.162329 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be"}
Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.163217 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.363289 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ffmqr"
Feb 21 08:28:36 crc kubenswrapper[4820]: I0221 08:28:36.424785 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ffmqr"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.157290 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.671447 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.672860 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient" containerID="cri-o://7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265" gracePeriod=2
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.682861 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713414 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713785 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-content"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713807 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-content"
Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713824 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-utilities"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713833 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="extract-utilities"
Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713848 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713855 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.713866 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.713873 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.714134 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6c7e38-55e9-4696-9f00-f7774a9e1410" containerName="registry-server"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.714161 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerName="openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.715001 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.718743 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.742577 4820 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7bff1f2-af0c-49de-981a-66f57457cc6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-21T08:28:37Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:8419493e1fd846703d277695e03fc5eb\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fzqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-21T08:28:37Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.756364 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: E0221 08:28:37.757618 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-6fzqc openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-6fzqc openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="b7bff1f2-af0c-49de-981a-66f57457cc6d"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.778270 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.788724 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.790337 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.798318 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.799553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.851487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config-secret\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.852033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.852140 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxjn\" (UniqueName: \"kubernetes.io/projected/c888e608-8215-44cd-a30b-43b1c34b5685-kube-api-access-hfxjn\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.852360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954404 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954585 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config-secret\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.954652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxjn\" (UniqueName: \"kubernetes.io/projected/c888e608-8215-44cd-a30b-43b1c34b5685-kube-api-access-hfxjn\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.955865 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.960855 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-openstack-config-secret\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.969902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c888e608-8215-44cd-a30b-43b1c34b5685-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:37 crc kubenswrapper[4820]: I0221 08:28:37.973064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxjn\" (UniqueName: \"kubernetes.io/projected/c888e608-8215-44cd-a30b-43b1c34b5685-kube-api-access-hfxjn\") pod \"openstackclient\" (UID: \"c888e608-8215-44cd-a30b-43b1c34b5685\") " pod="openstack/openstackclient"
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.112649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.186619 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.186609 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e"}
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.186770 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ffmqr" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" containerID="cri-o://3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d" gracePeriod=2
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.194714 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685"
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.387151 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 08:28:38 crc kubenswrapper[4820]: I0221 08:28:38.392660 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:38.709863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.204203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c888e608-8215-44cd-a30b-43b1c34b5685","Type":"ContainerStarted","Data":"ce8979afea330d028535b67341431f21a95db92497e86ee4513ede37f2783e32"}
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.204564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c888e608-8215-44cd-a30b-43b1c34b5685","Type":"ContainerStarted","Data":"7f1066cbe3db3a9cd4529c4a16baad5d1e0fb040be463ddd6709f5b9deed627e"}
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.212508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f"}
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.224310 4820 generic.go:334] "Generic (PLEG): container finished" podID="7a562943-0b50-4684-bfae-b185088ff6ba" containerID="3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d" exitCode=0
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.224416 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.224872 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d"}
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.233271 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.256389 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" podUID="c888e608-8215-44cd-a30b-43b1c34b5685"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.259195 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.259171905 podStartE2EDuration="2.259171905s" podCreationTimestamp="2026-02-21 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:28:39.227559382 +0000 UTC m=+6094.260643580" watchObservedRunningTime="2026-02-21 08:28:39.259171905 +0000 UTC m=+6094.292256113"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447557 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" containerID="cri-o://da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" gracePeriod=600
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447653 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" containerID="cri-o://dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" gracePeriod=600
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.447809 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" containerID="cri-o://8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" gracePeriod=600
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.504428 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.596341 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") pod \"7a562943-0b50-4684-bfae-b185088ff6ba\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") "
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.597967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities" (OuterVolumeSpecName: "utilities") pod "7a562943-0b50-4684-bfae-b185088ff6ba" (UID: "7a562943-0b50-4684-bfae-b185088ff6ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.598152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") pod \"7a562943-0b50-4684-bfae-b185088ff6ba\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") "
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.598215 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") pod \"7a562943-0b50-4684-bfae-b185088ff6ba\" (UID: \"7a562943-0b50-4684-bfae-b185088ff6ba\") "
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.599078 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.615478 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4" (OuterVolumeSpecName: "kube-api-access-c8qb4") pod "7a562943-0b50-4684-bfae-b185088ff6ba" (UID: "7a562943-0b50-4684-bfae-b185088ff6ba"). InnerVolumeSpecName "kube-api-access-c8qb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.704875 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8qb4\" (UniqueName: \"kubernetes.io/projected/7a562943-0b50-4684-bfae-b185088ff6ba-kube-api-access-c8qb4\") on node \"crc\" DevicePath \"\""
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.706978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a562943-0b50-4684-bfae-b185088ff6ba" (UID: "7a562943-0b50-4684-bfae-b185088ff6ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.714351 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7bff1f2-af0c-49de-981a-66f57457cc6d" path="/var/lib/kubelet/pods/b7bff1f2-af0c-49de-981a-66f57457cc6d/volumes"
Feb 21 08:28:39 crc kubenswrapper[4820]: I0221 08:28:39.807205 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a562943-0b50-4684-bfae-b185088ff6ba-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.203394 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.240153 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffmqr" event={"ID":"7a562943-0b50-4684-bfae-b185088ff6ba","Type":"ContainerDied","Data":"c6420d40aa16470899c936d5d897eeac108ecf0214d52d9f25b52f68283a6ec3"}
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.240210 4820 scope.go:117] "RemoveContainer" containerID="3bd5e985ee9159378765b592858683ff7721fb499b1997bc7c81039a1009a79d"
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.240211 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffmqr"
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.242945 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" exitCode=0
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.242979 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" exitCode=0
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.242990 4820 generic.go:334] "Generic (PLEG): container finished" podID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" exitCode=0
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243097 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"}
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"}
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"}
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.243591 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d23e6c5-673e-4d64-a39e-35e3b09d8d53","Type":"ContainerDied","Data":"8024b89ef022e9ea6ebea26e2dc95ed3b4eeb5984a56c0e3f69c3d705d4bb5c2"}
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.257705 4820 generic.go:334] "Generic (PLEG): container finished" podID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" containerID="7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265" exitCode=137
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.275966 4820 scope.go:117] "RemoveContainer" containerID="d9081c463552fa518c5d538e73ab78a2558358a5c0bb2c4b0449fe1106f5b8ef"
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325928 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.325966 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326031 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") "
Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326085 4820 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") pod \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\" (UID: \"7d23e6c5-673e-4d64-a39e-35e3b09d8d53\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326381 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.326483 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.327216 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.327267 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.327906 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.330065 4820 scope.go:117] "RemoveContainer" containerID="85adee0a70bdaf76950c48c8d16085221f5d902409bb24b4d68c953cf5c5a182" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.341286 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config" (OuterVolumeSpecName: "config") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.341413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn" (OuterVolumeSpecName: "kube-api-access-bhfxn") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "kube-api-access-bhfxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.345121 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.345264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.353598 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out" (OuterVolumeSpecName: "config-out") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.375800 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ffmqr"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.381744 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.393083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config" (OuterVolumeSpecName: "web-config") pod "7d23e6c5-673e-4d64-a39e-35e3b09d8d53" (UID: "7d23e6c5-673e-4d64-a39e-35e3b09d8d53"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430340 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") on node \"crc\" " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430405 4820 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-web-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430422 4820 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config-out\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430433 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430448 4820 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430485 4820 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430499 4820 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-tls-assets\") on node \"crc\" 
DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.430511 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhfxn\" (UniqueName: \"kubernetes.io/projected/7d23e6c5-673e-4d64-a39e-35e3b09d8d53-kube-api-access-bhfxn\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.477193 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.477382 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8") on node "crc" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.484098 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.542960 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.546023 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.576708 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.598798 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.615252 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.638605 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650000 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650551 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650573 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650589 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-content" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650599 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-content" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650611 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650617 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650627 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="init-config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650635 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="init-config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650652 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-utilities" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650659 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="extract-utilities" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650672 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650677 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.650689 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650696 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650860 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="config-reloader" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650874 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" containerName="registry-server" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650887 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.650902 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="thanos-sidecar" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.654702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.657831 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.659313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.660570 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.662064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.662190 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.662292 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.665391 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.667632 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zxfvn" Feb 21 08:28:40 crc 
kubenswrapper[4820]: I0221 08:28:40.667840 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.679591 4820 scope.go:117] "RemoveContainer" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.680299 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.719897 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.720259 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720373 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} err="failed to get container status \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720451 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 
08:28:40.720760 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720789 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} err="failed to get container status \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": rpc error: code = NotFound desc = could not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.720811 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.721055 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721147 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} err="failed to get container status \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": rpc 
error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721206 4820 scope.go:117] "RemoveContainer" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: E0221 08:28:40.721803 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721835 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} err="failed to get container status \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.721853 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722342 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} err="failed to get container status 
\"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722372 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722636 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} err="failed to get container status \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": rpc error: code = NotFound desc = could not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722661 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.722964 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} err="failed to get container status \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": rpc error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723023 4820 scope.go:117] "RemoveContainer" 
containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723627 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} err="failed to get container status \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723655 4820 scope.go:117] "RemoveContainer" containerID="8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723930 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9"} err="failed to get container status \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": rpc error: code = NotFound desc = could not find container \"8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9\": container with ID starting with 8373b0a9d7cee394ca68e99e6839a6b44a2d5ebb5da29b219b5d71d69f14d9b9 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.723950 4820 scope.go:117] "RemoveContainer" containerID="dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724186 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15"} err="failed to get container status \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": rpc error: code = NotFound desc = could 
not find container \"dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15\": container with ID starting with dfa2f8365c52164b409f3b7b54fcdae85cfba5cab7c57e4651d83afdd15a0a15 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724209 4820 scope.go:117] "RemoveContainer" containerID="da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724438 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10"} err="failed to get container status \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": rpc error: code = NotFound desc = could not find container \"da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10\": container with ID starting with da5b6385f6eb74d462ca0b85c797396b58beb575b3ed9d27abbe92b3e88b3f10 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724462 4820 scope.go:117] "RemoveContainer" containerID="6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.724833 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294"} err="failed to get container status \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": rpc error: code = NotFound desc = could not find container \"6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294\": container with ID starting with 6636ef7a7eb4d8b3da901c7fa8c88ac8cf7e145aff6df091665f09a5af079294 not found: ID does not exist" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.746988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjp8t\" (UniqueName: 
\"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") pod \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\" (UID: \"8a308e4a-53cc-4944-ba5b-e1eb6f4fa609\") " Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747724 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c81808a-06e3-4353-b7a6-56ff53f15b69-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747747 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299hp\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-kube-api-access-299hp\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747800 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747901 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.747935 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748230 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748276 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.748313 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.750997 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t" (OuterVolumeSpecName: "kube-api-access-fjp8t") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "kube-api-access-fjp8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.782419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.784993 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.806722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" (UID: "8a308e4a-53cc-4944-ba5b-e1eb6f4fa609"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.851979 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852149 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 
08:28:40.852351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c81808a-06e3-4353-b7a6-56ff53f15b69-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299hp\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-kube-api-access-299hp\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852444 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " 
pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852696 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852722 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852740 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.852755 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjp8t\" (UniqueName: \"kubernetes.io/projected/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609-kube-api-access-fjp8t\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.854275 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: 
I0221 08:28:40.855281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.857166 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0c81808a-06e3-4353-b7a6-56ff53f15b69-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.857228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.859956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.861929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.862766 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.862793 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cd8dd8437aa8075cd51ba65607a645fc10f7b325eb32cc6b53f399eac5c08fb8/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.863905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.864854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.865185 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.866087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c81808a-06e3-4353-b7a6-56ff53f15b69-config\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.866618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c81808a-06e3-4353-b7a6-56ff53f15b69-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.878584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299hp\" (UniqueName: \"kubernetes.io/projected/0c81808a-06e3-4353-b7a6-56ff53f15b69-kube-api-access-299hp\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.911014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff6d63a6-6e01-4d0e-ab31-72de07eeada8\") pod \"prometheus-metric-storage-0\" (UID: \"0c81808a-06e3-4353-b7a6-56ff53f15b69\") " pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:40 crc kubenswrapper[4820]: I0221 08:28:40.974125 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.273343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerStarted","Data":"5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973"} Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.281764 4820 scope.go:117] "RemoveContainer" containerID="7771891f2fa08b33757d032c137a833eef19f7cdb411370f65f1082b88750265" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.281784 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.321162 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" podUID="c888e608-8215-44cd-a30b-43b1c34b5685" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.323783 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.910355341 podStartE2EDuration="14.323757128s" podCreationTimestamp="2026-02-21 08:28:27 +0000 UTC" firstStartedPulling="2026-02-21 08:28:28.476048922 +0000 UTC m=+6083.509133120" lastFinishedPulling="2026-02-21 08:28:39.889450709 +0000 UTC m=+6094.922534907" observedRunningTime="2026-02-21 08:28:41.302849315 +0000 UTC m=+6096.335933523" watchObservedRunningTime="2026-02-21 08:28:41.323757128 +0000 UTC m=+6096.356841326" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.499143 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.708184 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a562943-0b50-4684-bfae-b185088ff6ba" 
path="/var/lib/kubelet/pods/7a562943-0b50-4684-bfae-b185088ff6ba/volumes" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.709706 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" path="/var/lib/kubelet/pods/7d23e6c5-673e-4d64-a39e-35e3b09d8d53/volumes" Feb 21 08:28:41 crc kubenswrapper[4820]: I0221 08:28:41.710991 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a308e4a-53cc-4944-ba5b-e1eb6f4fa609" path="/var/lib/kubelet/pods/8a308e4a-53cc-4944-ba5b-e1eb6f4fa609/volumes" Feb 21 08:28:42 crc kubenswrapper[4820]: I0221 08:28:42.292778 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"af923841822f57e1220072b03d9d12b984116351ec5d1c5c174e67e0eae729bb"} Feb 21 08:28:42 crc kubenswrapper[4820]: I0221 08:28:42.294575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 08:28:43 crc kubenswrapper[4820]: I0221 08:28:43.198057 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7d23e6c5-673e-4d64-a39e-35e3b09d8d53" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.128:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.844036 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.846309 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.865854 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.878837 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.881715 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.884769 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.937556 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.940482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:44 crc kubenswrapper[4820]: I0221 08:28:44.940557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.043546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod 
\"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044281 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.044602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.068620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gnt8\" (UniqueName: 
\"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"aodh-db-create-rrxv7\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.147074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.147171 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.148110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.168254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"aodh-1ff2-account-create-update-lcrwl\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.169226 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.221036 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.329908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"abe353e6b93e5f762f3bb39ba6e38f0bfa8c49efdf9d9452728ea5771d41ac62"} Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.543399 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" probeResult="failure" output=< Feb 21 08:28:45 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:28:45 crc kubenswrapper[4820]: > Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.892195 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:28:45 crc kubenswrapper[4820]: W0221 08:28:45.906869 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b874f59_5a8f_4ecc_8405_4993b1fe7fc2.slice/crio-f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b WatchSource:0}: Error finding container f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b: Status 404 returned error can't find the container with id f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b Feb 21 08:28:45 crc kubenswrapper[4820]: I0221 08:28:45.988079 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:28:45 crc kubenswrapper[4820]: W0221 08:28:45.994762 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e9afae_f779_41ff_af87_712577c90f88.slice/crio-7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0 WatchSource:0}: Error finding container 7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0: Status 404 returned error can't find the container with id 7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0 Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.338904 4820 generic.go:334] "Generic (PLEG): container finished" podID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerID="0fa05988329236af07673909477dc89b9d1d1084c3a32b7028ed0991a796e02a" exitCode=0 Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.338972 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rrxv7" event={"ID":"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2","Type":"ContainerDied","Data":"0fa05988329236af07673909477dc89b9d1d1084c3a32b7028ed0991a796e02a"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.339003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rrxv7" event={"ID":"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2","Type":"ContainerStarted","Data":"f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.342219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerStarted","Data":"283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.342275 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerStarted","Data":"7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0"} Feb 21 08:28:46 crc kubenswrapper[4820]: I0221 08:28:46.375519 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-1ff2-account-create-update-lcrwl" podStartSLOduration=2.375498467 podStartE2EDuration="2.375498467s" podCreationTimestamp="2026-02-21 08:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:28:46.369132056 +0000 UTC m=+6101.402216254" watchObservedRunningTime="2026-02-21 08:28:46.375498467 +0000 UTC m=+6101.408582675" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.350547 4820 generic.go:334] "Generic (PLEG): container finished" podID="c1e9afae-f779-41ff-af87-712577c90f88" containerID="283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe" exitCode=0 Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.350632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerDied","Data":"283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe"} Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.714030 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.822833 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") pod \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.822984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") pod \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\" (UID: \"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2\") " Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.823575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" (UID: "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.824993 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.829403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8" (OuterVolumeSpecName: "kube-api-access-7gnt8") pod "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" (UID: "0b874f59-5a8f-4ecc-8405-4993b1fe7fc2"). InnerVolumeSpecName "kube-api-access-7gnt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:47 crc kubenswrapper[4820]: I0221 08:28:47.927222 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gnt8\" (UniqueName: \"kubernetes.io/projected/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2-kube-api-access-7gnt8\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.360204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rrxv7" event={"ID":"0b874f59-5a8f-4ecc-8405-4993b1fe7fc2","Type":"ContainerDied","Data":"f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b"} Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.360586 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35881dbf86f6c61a39bcb43fe7cd476679de5f4c62e4558af3fa30d99664b7b" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.360331 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-rrxv7" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.717824 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.847396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") pod \"c1e9afae-f779-41ff-af87-712577c90f88\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.847704 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") pod \"c1e9afae-f779-41ff-af87-712577c90f88\" (UID: \"c1e9afae-f779-41ff-af87-712577c90f88\") " Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.848156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1e9afae-f779-41ff-af87-712577c90f88" (UID: "c1e9afae-f779-41ff-af87-712577c90f88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.849634 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e9afae-f779-41ff-af87-712577c90f88-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.853131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf" (OuterVolumeSpecName: "kube-api-access-hcpgf") pod "c1e9afae-f779-41ff-af87-712577c90f88" (UID: "c1e9afae-f779-41ff-af87-712577c90f88"). InnerVolumeSpecName "kube-api-access-hcpgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:48 crc kubenswrapper[4820]: I0221 08:28:48.951728 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcpgf\" (UniqueName: \"kubernetes.io/projected/c1e9afae-f779-41ff-af87-712577c90f88-kube-api-access-hcpgf\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.370011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1ff2-account-create-update-lcrwl" event={"ID":"c1e9afae-f779-41ff-af87-712577c90f88","Type":"ContainerDied","Data":"7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0"} Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.370047 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a919ff4e2f49f9ea59e415849f601212c2fa27e24b2837267736c1dfe9539b0" Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.370096 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1ff2-account-create-update-lcrwl" Feb 21 08:28:49 crc kubenswrapper[4820]: I0221 08:28:49.696768 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:28:49 crc kubenswrapper[4820]: E0221 08:28:49.697088 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.048476 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.063028 4820 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-db-sync-l4nch"] Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.077136 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:28:50 crc kubenswrapper[4820]: E0221 08:28:50.077823 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerName="mariadb-database-create" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.077940 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerName="mariadb-database-create" Feb 21 08:28:50 crc kubenswrapper[4820]: E0221 08:28:50.078041 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e9afae-f779-41ff-af87-712577c90f88" containerName="mariadb-account-create-update" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.078103 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e9afae-f779-41ff-af87-712577c90f88" containerName="mariadb-account-create-update" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.078382 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" containerName="mariadb-database-create" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.078453 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e9afae-f779-41ff-af87-712577c90f88" containerName="mariadb-account-create-update" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.079331 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.082425 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.082436 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.084922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.085046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-57b2p" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.091719 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"aodh-db-sync-qk6xf\" (UID: 
\"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.178353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.280334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.280406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.280483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.281347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.287908 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.288929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.301372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.324451 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"aodh-db-sync-qk6xf\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.399570 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:28:50 crc kubenswrapper[4820]: W0221 08:28:50.895110 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbe35ddb_c3e7_4233_96a1_fe0df9e13f6a.slice/crio-e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252 WatchSource:0}: Error finding container e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252: Status 404 returned error can't find the container with id e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252 Feb 21 08:28:50 crc kubenswrapper[4820]: I0221 08:28:50.896599 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.390896 4820 generic.go:334] "Generic (PLEG): container finished" podID="0c81808a-06e3-4353-b7a6-56ff53f15b69" containerID="abe353e6b93e5f762f3bb39ba6e38f0bfa8c49efdf9d9452728ea5771d41ac62" exitCode=0 Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.390973 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerDied","Data":"abe353e6b93e5f762f3bb39ba6e38f0bfa8c49efdf9d9452728ea5771d41ac62"} Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.392722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerStarted","Data":"e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252"} Feb 21 08:28:51 crc kubenswrapper[4820]: I0221 08:28:51.723751 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9668bc3-af3a-43af-8ead-9cc596776786" path="/var/lib/kubelet/pods/f9668bc3-af3a-43af-8ead-9cc596776786/volumes" Feb 21 08:28:52 crc kubenswrapper[4820]: I0221 08:28:52.403702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"8e043988875a6b194029cb4d402f3b8a04c319fd2eefd167e63c2713966e2cf7"} Feb 21 08:28:54 crc kubenswrapper[4820]: I0221 08:28:54.734648 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:54 crc kubenswrapper[4820]: I0221 08:28:54.796774 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:54 crc kubenswrapper[4820]: I0221 08:28:54.979684 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.431258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"38a53d9e5a8a8fb83fe0e1762d74b4301db431fb511089c340616c0bc3dfbb29"} Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.431565 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0c81808a-06e3-4353-b7a6-56ff53f15b69","Type":"ContainerStarted","Data":"6f31fe0f4ae9aa9a25ae259c618f133d9f2b6e9150b3c2dd0259349701fa165d"} Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.468882 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.468864425 podStartE2EDuration="15.468864425s" podCreationTimestamp="2026-02-21 08:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:28:55.459053351 +0000 UTC m=+6110.492137569" watchObservedRunningTime="2026-02-21 08:28:55.468864425 +0000 UTC m=+6110.501948623" Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 
08:28:55.975405 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.975464 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:55 crc kubenswrapper[4820]: I0221 08:28:55.982102 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:56 crc kubenswrapper[4820]: I0221 08:28:56.440277 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-px47t" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" containerID="cri-o://b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" gracePeriod=2 Feb 21 08:28:56 crc kubenswrapper[4820]: I0221 08:28:56.449039 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.037288 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.241022 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") pod \"ff694654-0a77-4fcd-86a3-af752c869359\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.241416 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") pod \"ff694654-0a77-4fcd-86a3-af752c869359\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.241531 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") pod \"ff694654-0a77-4fcd-86a3-af752c869359\" (UID: \"ff694654-0a77-4fcd-86a3-af752c869359\") " Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.242339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities" (OuterVolumeSpecName: "utilities") pod "ff694654-0a77-4fcd-86a3-af752c869359" (UID: "ff694654-0a77-4fcd-86a3-af752c869359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.246876 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb" (OuterVolumeSpecName: "kube-api-access-tqgtb") pod "ff694654-0a77-4fcd-86a3-af752c869359" (UID: "ff694654-0a77-4fcd-86a3-af752c869359"). InnerVolumeSpecName "kube-api-access-tqgtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.297035 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff694654-0a77-4fcd-86a3-af752c869359" (UID: "ff694654-0a77-4fcd-86a3-af752c869359"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.344715 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.344754 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff694654-0a77-4fcd-86a3-af752c869359-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.344770 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqgtb\" (UniqueName: \"kubernetes.io/projected/ff694654-0a77-4fcd-86a3-af752c869359-kube-api-access-tqgtb\") on node \"crc\" DevicePath \"\"" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453449 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff694654-0a77-4fcd-86a3-af752c869359" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" exitCode=0 Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453506 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-px47t" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6"} Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.453723 4820 scope.go:117] "RemoveContainer" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.454018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px47t" event={"ID":"ff694654-0a77-4fcd-86a3-af752c869359","Type":"ContainerDied","Data":"3e61fae0439d5f606a5f026c6da465f0c1a96e5b3ce271ea0442957ff02140e2"} Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.495575 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.542586 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-px47t"] Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.708972 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff694654-0a77-4fcd-86a3-af752c869359" path="/var/lib/kubelet/pods/ff694654-0a77-4fcd-86a3-af752c869359/volumes" Feb 21 08:28:57 crc kubenswrapper[4820]: I0221 08:28:57.835202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 21 08:28:59 crc kubenswrapper[4820]: I0221 08:28:59.854911 4820 scope.go:117] "RemoveContainer" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" Feb 21 08:28:59 crc kubenswrapper[4820]: I0221 08:28:59.880453 4820 scope.go:117] "RemoveContainer" 
containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.014946 4820 scope.go:117] "RemoveContainer" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" Feb 21 08:29:00 crc kubenswrapper[4820]: E0221 08:29:00.015557 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6\": container with ID starting with b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6 not found: ID does not exist" containerID="b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.015599 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6"} err="failed to get container status \"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6\": rpc error: code = NotFound desc = could not find container \"b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6\": container with ID starting with b72ecb2152a136bd9e5d4094fa8075465639254e3702bb08ae358c29797191f6 not found: ID does not exist" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.015627 4820 scope.go:117] "RemoveContainer" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" Feb 21 08:29:00 crc kubenswrapper[4820]: E0221 08:29:00.015943 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457\": container with ID starting with acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457 not found: ID does not exist" containerID="acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457" Feb 21 08:29:00 crc 
kubenswrapper[4820]: I0221 08:29:00.015969 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457"} err="failed to get container status \"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457\": rpc error: code = NotFound desc = could not find container \"acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457\": container with ID starting with acdc1fadd684e9cdd8f2923c41f407cdc8b84adf558fc0c4ed6d3ba983202457 not found: ID does not exist" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.015987 4820 scope.go:117] "RemoveContainer" containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" Feb 21 08:29:00 crc kubenswrapper[4820]: E0221 08:29:00.018017 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58\": container with ID starting with dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58 not found: ID does not exist" containerID="dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.018044 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58"} err="failed to get container status \"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58\": rpc error: code = NotFound desc = could not find container \"dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58\": container with ID starting with dffb3bdd5c2a378b5b8a86b44326e461c0b4291375f274086aa13ab5deb9ef58 not found: ID does not exist" Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.490341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" 
event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerStarted","Data":"633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac"} Feb 21 08:29:00 crc kubenswrapper[4820]: I0221 08:29:00.513421 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-qk6xf" podStartSLOduration=1.393496027 podStartE2EDuration="10.51340105s" podCreationTimestamp="2026-02-21 08:28:50 +0000 UTC" firstStartedPulling="2026-02-21 08:28:50.897933393 +0000 UTC m=+6105.931017601" lastFinishedPulling="2026-02-21 08:29:00.017838426 +0000 UTC m=+6115.050922624" observedRunningTime="2026-02-21 08:29:00.509635738 +0000 UTC m=+6115.542719926" watchObservedRunningTime="2026-02-21 08:29:00.51340105 +0000 UTC m=+6115.546485248" Feb 21 08:29:01 crc kubenswrapper[4820]: I0221 08:29:01.522041 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:01 crc kubenswrapper[4820]: I0221 08:29:01.522373 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" containerName="kube-state-metrics" containerID="cri-o://4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" gracePeriod=30 Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.044279 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.171374 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") pod \"153a0123-545b-4694-8e22-ef2a97ec9939\" (UID: \"153a0123-545b-4694-8e22-ef2a97ec9939\") " Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.181959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc" (OuterVolumeSpecName: "kube-api-access-5xxlc") pod "153a0123-545b-4694-8e22-ef2a97ec9939" (UID: "153a0123-545b-4694-8e22-ef2a97ec9939"). InnerVolumeSpecName "kube-api-access-5xxlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.273859 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xxlc\" (UniqueName: \"kubernetes.io/projected/153a0123-545b-4694-8e22-ef2a97ec9939-kube-api-access-5xxlc\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509657 4820 generic.go:334] "Generic (PLEG): container finished" podID="153a0123-545b-4694-8e22-ef2a97ec9939" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" exitCode=2 Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerDied","Data":"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1"} Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"153a0123-545b-4694-8e22-ef2a97ec9939","Type":"ContainerDied","Data":"6f3a59fdd346b4bf2cd6317827d2bf8f9f715934794135b9913b0326998f7186"} Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.509998 4820 scope.go:117] "RemoveContainer" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.510098 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.531968 4820 scope.go:117] "RemoveContainer" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.532482 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1\": container with ID starting with 4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1 not found: ID does not exist" containerID="4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.532525 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1"} err="failed to get container status \"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1\": rpc error: code = NotFound desc = could not find container \"4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1\": container with ID starting with 4643cf2b66d113cb6269626a29ebff72ef98c02a5fd60ea5e3d4fed0c752bab1 not found: ID does not exist" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.542142 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.555247 4820 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570191 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570854 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570882 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570907 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" containerName="kube-state-metrics" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570915 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" containerName="kube-state-metrics" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570924 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-content" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570932 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-content" Feb 21 08:29:02 crc kubenswrapper[4820]: E0221 08:29:02.570949 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-utilities" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.570956 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="extract-utilities" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.571204 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="153a0123-545b-4694-8e22-ef2a97ec9939" 
containerName="kube-state-metrics" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.571223 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff694654-0a77-4fcd-86a3-af752c869359" containerName="registry-server" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.574675 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.579263 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.579566 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.618845 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.685865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.685924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-924jz\" (UniqueName: \"kubernetes.io/projected/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-api-access-924jz\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.686046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.686107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787732 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-924jz\" (UniqueName: \"kubernetes.io/projected/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-api-access-924jz\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.787806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.792722 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.794925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.795009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.806288 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-924jz\" (UniqueName: \"kubernetes.io/projected/478142ab-f7fa-4bbd-9051-6d1f5e16a9e2-kube-api-access-924jz\") pod \"kube-state-metrics-0\" (UID: \"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2\") " pod="openstack/kube-state-metrics-0" Feb 21 08:29:02 crc kubenswrapper[4820]: I0221 08:29:02.899217 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 21 08:29:03 crc kubenswrapper[4820]: W0221 08:29:03.374565 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478142ab_f7fa_4bbd_9051_6d1f5e16a9e2.slice/crio-20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3 WatchSource:0}: Error finding container 20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3: Status 404 returned error can't find the container with id 20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.374774 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.519824 4820 generic.go:334] "Generic (PLEG): container finished" podID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerID="633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac" exitCode=0 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.520206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerDied","Data":"633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac"} Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.522709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2","Type":"ContainerStarted","Data":"20bf11746dffbbaaaa0e2aad736170457d25f5063b79c03ae7a10ec2e2b7b6c3"} Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589057 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589382 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" 
containerName="ceilometer-central-agent" containerID="cri-o://137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589452 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" containerID="cri-o://5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589501 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" containerID="cri-o://dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.589514 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" containerID="cri-o://5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e" gracePeriod=30 Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.696973 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:03 crc kubenswrapper[4820]: E0221 08:29:03.697208 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:03 crc kubenswrapper[4820]: I0221 08:29:03.716097 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="153a0123-545b-4694-8e22-ef2a97ec9939" path="/var/lib/kubelet/pods/153a0123-545b-4694-8e22-ef2a97ec9939/volumes" Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536021 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973" exitCode=0 Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536378 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f" exitCode=2 Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536388 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be" exitCode=0 Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.536469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.538388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"478142ab-f7fa-4bbd-9051-6d1f5e16a9e2","Type":"ContainerStarted","Data":"d017a2f457f78c82679e61c7b4f8bf88dada53ee987f76e2dd4337fc205ace8b"} Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.538428 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.558947 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7493413389999999 podStartE2EDuration="2.558919364s" podCreationTimestamp="2026-02-21 08:29:02 +0000 UTC" firstStartedPulling="2026-02-21 08:29:03.378122346 +0000 UTC m=+6118.411206544" lastFinishedPulling="2026-02-21 08:29:04.187700371 +0000 UTC m=+6119.220784569" observedRunningTime="2026-02-21 08:29:04.557505536 +0000 UTC m=+6119.590589744" watchObservedRunningTime="2026-02-21 08:29:04.558919364 +0000 UTC m=+6119.592003562" Feb 21 08:29:04 crc kubenswrapper[4820]: I0221 08:29:04.909109 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033463 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033551 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.033806 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") pod \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\" (UID: \"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a\") " Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.039369 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts" (OuterVolumeSpecName: "scripts") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.039935 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n" (OuterVolumeSpecName: "kube-api-access-2jk6n") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "kube-api-access-2jk6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.063463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.066090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data" (OuterVolumeSpecName: "config-data") pod "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" (UID: "cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136425 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jk6n\" (UniqueName: \"kubernetes.io/projected/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-kube-api-access-2jk6n\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136465 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136474 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.136483 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.549044 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qk6xf" event={"ID":"cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a","Type":"ContainerDied","Data":"e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252"} Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.549098 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81b9ea598682aec6afdba616cb1ce1a5280dd01172f6b453d9afb8515a2a252" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.549060 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qk6xf" Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.553109 4820 generic.go:334] "Generic (PLEG): container finished" podID="042d4af3-fd72-450a-a2e8-e296886b495a" containerID="5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e" exitCode=0 Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.553186 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e"} Feb 21 08:29:05 crc kubenswrapper[4820]: I0221 08:29:05.941611 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.056631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.056982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057138 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.057443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") pod \"042d4af3-fd72-450a-a2e8-e296886b495a\" (UID: \"042d4af3-fd72-450a-a2e8-e296886b495a\") " Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.058321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.059976 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.063153 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j" (OuterVolumeSpecName: "kube-api-access-26m8j") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "kube-api-access-26m8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.069735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts" (OuterVolumeSpecName: "scripts") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.088314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.153439 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data" (OuterVolumeSpecName: "config-data") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.154648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "042d4af3-fd72-450a-a2e8-e296886b495a" (UID: "042d4af3-fd72-450a-a2e8-e296886b495a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160266 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160320 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160332 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26m8j\" (UniqueName: \"kubernetes.io/projected/042d4af3-fd72-450a-a2e8-e296886b495a-kube-api-access-26m8j\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160341 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-scripts\") on node \"crc\" DevicePath \"\"" Feb 21 
08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160349 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160357 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042d4af3-fd72-450a-a2e8-e296886b495a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.160368 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042d4af3-fd72-450a-a2e8-e296886b495a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.565716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042d4af3-fd72-450a-a2e8-e296886b495a","Type":"ContainerDied","Data":"237af00766cb3ac668153a70322a571c05a31fa10748184013c7aedd5f203ded"} Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.565813 4820 scope.go:117] "RemoveContainer" containerID="5d40695bfcfa3209edf5615a34cf423258d7b6777c0a391627147abfb464e973" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.565905 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.596751 4820 scope.go:117] "RemoveContainer" containerID="dc63bc01e75861e72cb0c1d7c880c6b18870394641adb9e882bcc3de7204be7f" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.610589 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.623884 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.624701 4820 scope.go:117] "RemoveContainer" containerID="5fbdf4c2b857c36d154c8be63ea4d0db344d745400f1a3617f7fffb564dcdb5e" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636121 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636670 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-central-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636692 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-central-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636718 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636727 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636745 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636756 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636778 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636786 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" Feb 21 08:29:06 crc kubenswrapper[4820]: E0221 08:29:06.636803 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerName="aodh-db-sync" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.636813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerName="aodh-db-sync" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637046 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-central-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637064 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" containerName="aodh-db-sync" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637085 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="proxy-httpd" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637102 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="sg-core" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.637117 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" containerName="ceilometer-notification-agent" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.639154 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.643277 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.643423 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.643855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.647520 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.664364 4820 scope.go:117] "RemoveContainer" containerID="137708a2f124c1e1d52df3c243cda8bcc10f1d8f0867fea3c2c60d674a9293be" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669487 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669560 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 
08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669645 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669694 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.669934 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772274 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.772879 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.773597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.777422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.778133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.778305 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.778689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.779050 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.788546 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"ceilometer-0\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " pod="openstack/ceilometer-0" Feb 21 08:29:06 crc kubenswrapper[4820]: I0221 08:29:06.970196 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:07 crc kubenswrapper[4820]: W0221 08:29:07.417224 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf10eadaa_dba6_443d_8cc4_fd1604d40ac1.slice/crio-548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef WatchSource:0}: Error finding container 548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef: Status 404 returned error can't find the container with id 548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef Feb 21 08:29:07 crc kubenswrapper[4820]: I0221 08:29:07.417659 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:07 crc kubenswrapper[4820]: I0221 08:29:07.585195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef"} Feb 21 08:29:07 crc kubenswrapper[4820]: I0221 08:29:07.710339 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042d4af3-fd72-450a-a2e8-e296886b495a" path="/var/lib/kubelet/pods/042d4af3-fd72-450a-a2e8-e296886b495a/volumes" Feb 21 08:29:08 crc kubenswrapper[4820]: I0221 08:29:08.597308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db"} Feb 21 08:29:08 crc kubenswrapper[4820]: I0221 08:29:08.597731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6"} Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.608803 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a"} Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.820304 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.842649 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.894289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.895781 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-57b2p" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.896276 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.898383 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.957523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.958078 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.958126 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:09 crc kubenswrapper[4820]: I0221 08:29:09.958182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.085710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 
crc kubenswrapper[4820]: I0221 08:29:10.093537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.094580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.094694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.108863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"aodh-0\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") " pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.280999 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:10 crc kubenswrapper[4820]: I0221 08:29:10.837047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.632998 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerStarted","Data":"1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f"} Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.633493 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.634784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5"} Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.634839 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"9d9fd65faad6e1c4ee6d0dde3d20356e1e09d8c8be56847a345653bfe39e1ea5"} Feb 21 08:29:11 crc kubenswrapper[4820]: I0221 08:29:11.663866 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.752305153 podStartE2EDuration="5.66384671s" podCreationTimestamp="2026-02-21 08:29:06 +0000 UTC" firstStartedPulling="2026-02-21 08:29:07.41995691 +0000 UTC m=+6122.453041108" lastFinishedPulling="2026-02-21 08:29:10.331498467 +0000 UTC m=+6125.364582665" observedRunningTime="2026-02-21 08:29:11.658948128 +0000 UTC m=+6126.692032366" watchObservedRunningTime="2026-02-21 08:29:11.66384671 +0000 UTC m=+6126.696930908" Feb 21 08:29:12 crc kubenswrapper[4820]: I0221 08:29:12.917831 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.272802 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.378811 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.664296 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent" containerID="cri-o://f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6" gracePeriod=30 Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60"} Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665554 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd" containerID="cri-o://1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f" gracePeriod=30 Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665700 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core" containerID="cri-o://0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a" gracePeriod=30 Feb 21 08:29:13 crc kubenswrapper[4820]: I0221 08:29:13.665823 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent" 
containerID="cri-o://75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db" gracePeriod=30 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679140 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f" exitCode=0 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679182 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a" exitCode=2 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679193 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db" exitCode=0 Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f"} Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a"} Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.679348 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db"} Feb 21 08:29:14 crc kubenswrapper[4820]: I0221 08:29:14.697713 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:29:14 crc kubenswrapper[4820]: E0221 08:29:14.698004 
4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.726950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9"} Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.738690 4820 generic.go:334] "Generic (PLEG): container finished" podID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerID="f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6" exitCode=0 Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.738753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6"} Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.875197 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.955716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956144 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956314 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956496 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956692 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956808 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.956961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.957170 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.957827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") pod \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\" (UID: \"f10eadaa-dba6-443d-8cc4-fd1604d40ac1\") " Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.958152 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.958651 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.958734 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.962188 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts" (OuterVolumeSpecName: "scripts") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.962911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km" (OuterVolumeSpecName: "kube-api-access-pz6km") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "kube-api-access-pz6km". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:29:17 crc kubenswrapper[4820]: I0221 08:29:17.991587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.014573 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.040825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060214 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060262 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060273 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060281 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz6km\" (UniqueName: \"kubernetes.io/projected/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-kube-api-access-pz6km\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.060290 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.065752 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data" (OuterVolumeSpecName: "config-data") pod "f10eadaa-dba6-443d-8cc4-fd1604d40ac1" (UID: "f10eadaa-dba6-443d-8cc4-fd1604d40ac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.162212 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10eadaa-dba6-443d-8cc4-fd1604d40ac1-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.753399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f10eadaa-dba6-443d-8cc4-fd1604d40ac1","Type":"ContainerDied","Data":"548ed111805156a09dd101ff0d5f5513b29da6d0a499b57161ac65c4d61fe4ef"}
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.753793 4820 scope.go:117] "RemoveContainer" containerID="1a3c7711b7269297ace25a11d548c2a544aad86593b534338124d236ec2bcd4f"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.753481 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.803004 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.817977 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.831193 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.831729 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.831750 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent"
Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.831984 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.832699 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent"
Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.832768 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.832789 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd"
Feb 21 08:29:18 crc kubenswrapper[4820]: E0221 08:29:18.832808 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.832817 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833060 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="sg-core"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833089 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-central-agent"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833104 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="ceilometer-notification-agent"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.833117 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" containerName="proxy-httpd"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.843633 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.843735 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.861620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.862887 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.869485 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983266 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-run-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983347 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpkg\" (UniqueName: \"kubernetes.io/projected/26462812-349d-4dc0-ac4b-3d89ebeb997c-kube-api-access-sdpkg\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983520 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-config-data\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-log-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:18 crc kubenswrapper[4820]: I0221 08:29:18.983655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-scripts\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.085826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-scripts\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.085940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-run-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-run-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086834 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpkg\" (UniqueName: \"kubernetes.io/projected/26462812-349d-4dc0-ac4b-3d89ebeb997c-kube-api-access-sdpkg\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.086924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.087010 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-config-data\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.087088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-log-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.087128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.089718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26462812-349d-4dc0-ac4b-3d89ebeb997c-log-httpd\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.099136 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-config-data\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.099789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-scripts\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.100107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.100439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.100893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26462812-349d-4dc0-ac4b-3d89ebeb997c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.115115 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpkg\" (UniqueName: \"kubernetes.io/projected/26462812-349d-4dc0-ac4b-3d89ebeb997c-kube-api-access-sdpkg\") pod \"ceilometer-0\" (UID: \"26462812-349d-4dc0-ac4b-3d89ebeb997c\") " pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.199541 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.511453 4820 scope.go:117] "RemoveContainer" containerID="0788b237a69c54fccab91ec77aae8ace4661556d9cd7128edf8a9304b468ce0a"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.541255 4820 scope.go:117] "RemoveContainer" containerID="75e74cfe50d63cfa3d23eb3c164b3839057766f95257d111da1d1bfc750170db"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.643775 4820 scope.go:117] "RemoveContainer" containerID="f39d4192f8dc890e8590645c040beb632e176cff838f9e68b810d611d1b5e7f6"
Feb 21 08:29:19 crc kubenswrapper[4820]: I0221 08:29:19.714092 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10eadaa-dba6-443d-8cc4-fd1604d40ac1" path="/var/lib/kubelet/pods/f10eadaa-dba6-443d-8cc4-fd1604d40ac1/volumes"
Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.046598 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8fv99"]
Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.055934 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8fv99"]
Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.599694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 21 08:29:20 crc kubenswrapper[4820]: I0221 08:29:20.784447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"88458e5466698b27a94da126f9321d84278fb0967bc046e146ba87624b508dfe"}
Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.031644 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"]
Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.042533 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9480-account-create-update-bpvlj"]
Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.712112 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549ebe18-2d08-41b5-ac23-2321a43dfe38" path="/var/lib/kubelet/pods/549ebe18-2d08-41b5-ac23-2321a43dfe38/volumes"
Feb 21 08:29:21 crc kubenswrapper[4820]: I0221 08:29:21.713290 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f96e017-4a70-45ac-9d44-b57829510e53" path="/var/lib/kubelet/pods/8f96e017-4a70-45ac-9d44-b57829510e53/volumes"
Feb 21 08:29:22 crc kubenswrapper[4820]: I0221 08:29:22.802392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"2347cc70a093f9c6de2675c539be996f97a6480bbc37adc1f7822b6ae412ea70"}
Feb 21 08:29:22 crc kubenswrapper[4820]: I0221 08:29:22.805077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerStarted","Data":"9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf"}
Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.811971 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api" containerID="cri-o://e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5" gracePeriod=30
Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.812023 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener" containerID="cri-o://9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf" gracePeriod=30
Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.812045 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier" containerID="cri-o://daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9" gracePeriod=30
Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.812058 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator" containerID="cri-o://5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60" gracePeriod=30
Feb 21 08:29:23 crc kubenswrapper[4820]: I0221 08:29:23.845977 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.208344101 podStartE2EDuration="14.84594849s" podCreationTimestamp="2026-02-21 08:29:09 +0000 UTC" firstStartedPulling="2026-02-21 08:29:10.838192021 +0000 UTC m=+6125.871276229" lastFinishedPulling="2026-02-21 08:29:22.47579642 +0000 UTC m=+6137.508880618" observedRunningTime="2026-02-21 08:29:23.831050349 +0000 UTC m=+6138.864134547" watchObservedRunningTime="2026-02-21 08:29:23.84594849 +0000 UTC m=+6138.879032688"
Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840167 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60" exitCode=0
Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840539 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5" exitCode=0
Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60"}
Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.840650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5"}
Feb 21 08:29:24 crc kubenswrapper[4820]: I0221 08:29:24.843633 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"94594533bd585a6a58306b93861b0d9521919f0ee210a0c108560d7b84cd6ba1"}
Feb 21 08:29:28 crc kubenswrapper[4820]: I0221 08:29:28.696968 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:29:28 crc kubenswrapper[4820]: E0221 08:29:28.697801 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:29:29 crc kubenswrapper[4820]: I0221 08:29:29.891886 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"d302a721e767d9b472de66d5a5f5d61bb9b34defaa62caf2e3c8972b81687b38"}
Feb 21 08:29:30 crc kubenswrapper[4820]: I0221 08:29:30.040811 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v696w"]
Feb 21 08:29:30 crc kubenswrapper[4820]: I0221 08:29:30.055347 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v696w"]
Feb 21 08:29:32 crc kubenswrapper[4820]: I0221 08:29:32.311383 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffe0144-e67b-4ea7-8212-5989f992997e" path="/var/lib/kubelet/pods/8ffe0144-e67b-4ea7-8212-5989f992997e/volumes"
Feb 21 08:29:33 crc kubenswrapper[4820]: I0221 08:29:33.994641 4820 scope.go:117] "RemoveContainer" containerID="d559368b0d2930ebf44224fc90536866334fa2342759e67f4d25212eb003ee23"
Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.034475 4820 scope.go:117] "RemoveContainer" containerID="5b643310775fbc512d74f27daced1ed65eb8590a166407d6e244cc44ba3b9077"
Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.252981 4820 scope.go:117] "RemoveContainer" containerID="0c4429cc6df30d2e093692bf4cbd7627086a28c710ac6ad90f897b0cf49fd1d6"
Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.303740 4820 scope.go:117] "RemoveContainer" containerID="f93d7049647bbd8ed3612333a8d08a0df9aca74de7fd44b0b8ebf76d66d50711"
Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.950727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26462812-349d-4dc0-ac4b-3d89ebeb997c","Type":"ContainerStarted","Data":"f56913eae09efe6a1d1c4b9a3a343efb64bdcc32cc33cc86a07d6b75b4b4abdb"}
Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.951205 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 21 08:29:34 crc kubenswrapper[4820]: I0221 08:29:34.978715 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.553384932 podStartE2EDuration="16.978692463s" podCreationTimestamp="2026-02-21 08:29:18 +0000 UTC" firstStartedPulling="2026-02-21 08:29:20.609246091 +0000 UTC m=+6135.642330289" lastFinishedPulling="2026-02-21 08:29:34.034553622 +0000 UTC m=+6149.067637820" observedRunningTime="2026-02-21 08:29:34.971740156 +0000 UTC m=+6150.004824354" watchObservedRunningTime="2026-02-21 08:29:34.978692463 +0000 UTC m=+6150.011776671"
Feb 21 08:29:39 crc kubenswrapper[4820]: I0221 08:29:39.697699 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:29:39 crc kubenswrapper[4820]: E0221 08:29:39.698683 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:29:49 crc kubenswrapper[4820]: I0221 08:29:49.213293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.148744 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf" exitCode=137
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.149325 4820 generic.go:334] "Generic (PLEG): container finished" podID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerID="daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9" exitCode=137
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.148944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf"}
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.149371 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9"}
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.386136 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.411519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") "
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.411884 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") "
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.411995 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") "
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.412295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") pod \"38a5221c-e05a-457c-a5d1-5c0404422efb\" (UID: \"38a5221c-e05a-457c-a5d1-5c0404422efb\") "
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.460354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st" (OuterVolumeSpecName: "kube-api-access-9l6st") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "kube-api-access-9l6st". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.460941 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts" (OuterVolumeSpecName: "scripts") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.516868 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-scripts\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.516912 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6st\" (UniqueName: \"kubernetes.io/projected/38a5221c-e05a-457c-a5d1-5c0404422efb-kube-api-access-9l6st\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.544160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.552958 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data" (OuterVolumeSpecName: "config-data") pod "38a5221c-e05a-457c-a5d1-5c0404422efb" (UID: "38a5221c-e05a-457c-a5d1-5c0404422efb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.619037 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.619080 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a5221c-e05a-457c-a5d1-5c0404422efb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:29:54 crc kubenswrapper[4820]: I0221 08:29:54.696382 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:29:54 crc kubenswrapper[4820]: E0221 08:29:54.696843 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.160009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"38a5221c-e05a-457c-a5d1-5c0404422efb","Type":"ContainerDied","Data":"9d9fd65faad6e1c4ee6d0dde3d20356e1e09d8c8be56847a345653bfe39e1ea5"}
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.160364 4820 scope.go:117] "RemoveContainer" containerID="9965ea29574cc272a65bd3cefce4b12ee4617b4a81b40c790f6d6b20541879cf"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.160062 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.188142 4820 scope.go:117] "RemoveContainer" containerID="daf57cacaeae4da4b902644cac1afb13db7f6e91f4829154a92335d8e2720bb9"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.198794 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.208569 4820 scope.go:117] "RemoveContainer" containerID="5c26eca5a269ff7b5e371cbfd29b74050e75b8dfad4d8c863b640758e905ec60"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.215964 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.229404 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230019 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230039 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator"
Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230056 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230066 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier"
Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230118 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230127 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api"
Feb 21 08:29:55 crc kubenswrapper[4820]: E0221 08:29:55.230137 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230145 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230380 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-notifier"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230403 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-api"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230423 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-evaluator"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.230442 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" containerName="aodh-listener"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.232795 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.236854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.237011 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-57b2p"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.237144 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.237962 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.238147 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.245596 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.251180 4820 scope.go:117] "RemoveContainer" containerID="e9135352833924be297cd8ff1b5a93a442c64c957f9c539dcdc530e2b1a66bc5"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.338832 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-combined-ca-bundle\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.338881 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-scripts\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0"
Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339087 4820
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-config-data\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsqv\" (UniqueName: \"kubernetes.io/projected/77710997-adc1-48de-a5bd-d2e00959d510-kube-api-access-9tsqv\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-internal-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.339384 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-public-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.441864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-config-data\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.441953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsqv\" (UniqueName: 
\"kubernetes.io/projected/77710997-adc1-48de-a5bd-d2e00959d510-kube-api-access-9tsqv\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.441982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-internal-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.442007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-public-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.442079 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-combined-ca-bundle\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.442102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-scripts\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.446203 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-combined-ca-bundle\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.446655 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-config-data\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.448793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-public-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.454844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-scripts\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.455106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77710997-adc1-48de-a5bd-d2e00959d510-internal-tls-certs\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.462063 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsqv\" (UniqueName: \"kubernetes.io/projected/77710997-adc1-48de-a5bd-d2e00959d510-kube-api-access-9tsqv\") pod \"aodh-0\" (UID: \"77710997-adc1-48de-a5bd-d2e00959d510\") " pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.550402 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 21 08:29:55 crc kubenswrapper[4820]: I0221 08:29:55.721359 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a5221c-e05a-457c-a5d1-5c0404422efb" path="/var/lib/kubelet/pods/38a5221c-e05a-457c-a5d1-5c0404422efb/volumes" Feb 21 08:29:56 crc kubenswrapper[4820]: I0221 08:29:56.038123 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 21 08:29:56 crc kubenswrapper[4820]: I0221 08:29:56.174961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"d29cd41e741c75118675a0b2085bec3983c242922e1153955ba454aa59846ce6"} Feb 21 08:29:57 crc kubenswrapper[4820]: I0221 08:29:57.193948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"d52bbb6dff1dad3f9639abd9f75ec1a329eabfd6d40382a99e345c722e43e137"} Feb 21 08:29:57 crc kubenswrapper[4820]: I0221 08:29:57.194351 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"1cf024d079993ee0570e2bceb6693026887e61ade71518c5f06afabf03ab2d9f"} Feb 21 08:29:58 crc kubenswrapper[4820]: I0221 08:29:58.206984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"d6d1189259787e3b26084b33f304875c89d97bb52f6b34d01017989254e26ebf"} Feb 21 08:29:58 crc kubenswrapper[4820]: I0221 08:29:58.207390 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"77710997-adc1-48de-a5bd-d2e00959d510","Type":"ContainerStarted","Data":"6f5db994a98dfd6cb0a46770d8ad85bb3777b7c801b6fa47f3c9049cfe541df6"} Feb 21 08:29:58 crc kubenswrapper[4820]: I0221 08:29:58.247758 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.9273835639999999 podStartE2EDuration="3.247733413s" podCreationTimestamp="2026-02-21 08:29:55 +0000 UTC" firstStartedPulling="2026-02-21 08:29:56.03213079 +0000 UTC m=+6171.065214988" lastFinishedPulling="2026-02-21 08:29:57.352480639 +0000 UTC m=+6172.385564837" observedRunningTime="2026-02-21 08:29:58.239493171 +0000 UTC m=+6173.272577389" watchObservedRunningTime="2026-02-21 08:29:58.247733413 +0000 UTC m=+6173.280817611" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.173639 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.178182 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.180131 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.180344 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.192857 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.249402 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" 
Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.249457 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.249589 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.352750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.353021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.353074 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod 
\"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.353884 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.373795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.380593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"collect-profiles-29527710-h44zw\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.504708 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:00 crc kubenswrapper[4820]: I0221 08:30:00.975410 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 08:30:00 crc kubenswrapper[4820]: W0221 08:30:00.976922 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7930cbc_54a2_4fed_8153_27bb0a44221d.slice/crio-3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8 WatchSource:0}: Error finding container 3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8: Status 404 returned error can't find the container with id 3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8 Feb 21 08:30:01 crc kubenswrapper[4820]: I0221 08:30:01.236785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerStarted","Data":"bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf"} Feb 21 08:30:01 crc kubenswrapper[4820]: I0221 08:30:01.237093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerStarted","Data":"3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8"} Feb 21 08:30:01 crc kubenswrapper[4820]: I0221 08:30:01.289825 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" podStartSLOduration=1.289805748 podStartE2EDuration="1.289805748s" podCreationTimestamp="2026-02-21 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 
08:30:01.274159566 +0000 UTC m=+6176.307243764" watchObservedRunningTime="2026-02-21 08:30:01.289805748 +0000 UTC m=+6176.322889946" Feb 21 08:30:02 crc kubenswrapper[4820]: I0221 08:30:02.247037 4820 generic.go:334] "Generic (PLEG): container finished" podID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerID="bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf" exitCode=0 Feb 21 08:30:02 crc kubenswrapper[4820]: I0221 08:30:02.247136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerDied","Data":"bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf"} Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.632683 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.768221 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") pod \"b7930cbc-54a2-4fed-8153-27bb0a44221d\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.768399 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") pod \"b7930cbc-54a2-4fed-8153-27bb0a44221d\" (UID: \"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.768636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") pod \"b7930cbc-54a2-4fed-8153-27bb0a44221d\" (UID: 
\"b7930cbc-54a2-4fed-8153-27bb0a44221d\") " Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.769424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7930cbc-54a2-4fed-8153-27bb0a44221d" (UID: "b7930cbc-54a2-4fed-8153-27bb0a44221d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.770433 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7930cbc-54a2-4fed-8153-27bb0a44221d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.775911 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7930cbc-54a2-4fed-8153-27bb0a44221d" (UID: "b7930cbc-54a2-4fed-8153-27bb0a44221d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.776435 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms" (OuterVolumeSpecName: "kube-api-access-7bnms") pod "b7930cbc-54a2-4fed-8153-27bb0a44221d" (UID: "b7930cbc-54a2-4fed-8153-27bb0a44221d"). InnerVolumeSpecName "kube-api-access-7bnms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.874844 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bnms\" (UniqueName: \"kubernetes.io/projected/b7930cbc-54a2-4fed-8153-27bb0a44221d-kube-api-access-7bnms\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:03 crc kubenswrapper[4820]: I0221 08:30:03.874891 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7930cbc-54a2-4fed-8153-27bb0a44221d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.268287 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.268285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw" event={"ID":"b7930cbc-54a2-4fed-8153-27bb0a44221d","Type":"ContainerDied","Data":"3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8"} Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.268417 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e6b7fbd4360dd59305b5067fc8432310bbf734b1a8a2ede0ed6b2f438e52fc8" Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.712684 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 08:30:04 crc kubenswrapper[4820]: I0221 08:30:04.722918 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527665-bchpb"] Feb 21 08:30:05 crc kubenswrapper[4820]: I0221 08:30:05.710409 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc522f8d-0981-40c6-a17f-c5517c78a9cd" 
path="/var/lib/kubelet/pods/bc522f8d-0981-40c6-a17f-c5517c78a9cd/volumes" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.620582 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:06 crc kubenswrapper[4820]: E0221 08:30:06.621115 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerName="collect-profiles" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.621134 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerName="collect-profiles" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.621433 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" containerName="collect-profiles" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.622758 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.625849 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.634767 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.732750 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.732847 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.732877 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.733233 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.733368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.733426 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835440 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.835999 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.836020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.836707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.836734 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.837321 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.837729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.837753 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" 
(UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.865959 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"dnsmasq-dns-847c5cfb7c-qgqzk\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:06 crc kubenswrapper[4820]: I0221 08:30:06.941042 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:07 crc kubenswrapper[4820]: I0221 08:30:07.458447 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:08 crc kubenswrapper[4820]: I0221 08:30:08.305405 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff" exitCode=0 Feb 21 08:30:08 crc kubenswrapper[4820]: I0221 08:30:08.305906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerDied","Data":"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"} Feb 21 08:30:08 crc kubenswrapper[4820]: I0221 08:30:08.306832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerStarted","Data":"dab5f24bf00cb2fd5017ef99f29db8355c02558ab97cb6a0a73e353ab2a7ff13"} Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.316792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" 
event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerStarted","Data":"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"} Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.317069 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.347173 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" podStartSLOduration=3.347143578 podStartE2EDuration="3.347143578s" podCreationTimestamp="2026-02-21 08:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:30:09.336706756 +0000 UTC m=+6184.369790954" watchObservedRunningTime="2026-02-21 08:30:09.347143578 +0000 UTC m=+6184.380227776" Feb 21 08:30:09 crc kubenswrapper[4820]: I0221 08:30:09.696933 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:30:09 crc kubenswrapper[4820]: E0221 08:30:09.697182 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:30:16 crc kubenswrapper[4820]: I0221 08:30:16.943317 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.015036 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.015338 4820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" containerID="cri-o://0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8" gracePeriod=10 Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.156617 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6dfc499f-dvr9b"] Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.158973 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.181792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6dfc499f-dvr9b"] Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211215 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-dns-svc\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjvk\" (UniqueName: \"kubernetes.io/projected/6c431de9-6c4a-4279-a63a-bd6742fc68f0-kube-api-access-xtjvk\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211378 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-config\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " 
pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211488 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211511 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-openstack-cell1\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.211548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.280697 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.98:5353: connect: connection refused" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313292 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-openstack-cell1\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 
08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313367 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313448 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-dns-svc\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjvk\" (UniqueName: \"kubernetes.io/projected/6c431de9-6c4a-4279-a63a-bd6742fc68f0-kube-api-access-xtjvk\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-config\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.313618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.314463 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.314822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-openstack-cell1\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.315034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-dns-svc\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.315425 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-config\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.315570 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c431de9-6c4a-4279-a63a-bd6742fc68f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.342618 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjvk\" (UniqueName: 
\"kubernetes.io/projected/6c431de9-6c4a-4279-a63a-bd6742fc68f0-kube-api-access-xtjvk\") pod \"dnsmasq-dns-6f6dfc499f-dvr9b\" (UID: \"6c431de9-6c4a-4279-a63a-bd6742fc68f0\") " pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.390760 4820 generic.go:334] "Generic (PLEG): container finished" podID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerID="0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8" exitCode=0 Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.390811 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerDied","Data":"0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8"} Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.536566 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.689062 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.842382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") pod \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\" (UID: \"d22b75bc-f9ca-4b8f-ae95-5d348d367d56\") " Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.867878 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f" (OuterVolumeSpecName: "kube-api-access-6lr4f") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "kube-api-access-6lr4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.906157 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.907123 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config" (OuterVolumeSpecName: "config") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.912364 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.914389 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d22b75bc-f9ca-4b8f-ae95-5d348d367d56" (UID: "d22b75bc-f9ca-4b8f-ae95-5d348d367d56"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944275 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944325 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944337 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944350 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lr4f\" (UniqueName: \"kubernetes.io/projected/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-kube-api-access-6lr4f\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:17 crc kubenswrapper[4820]: I0221 08:30:17.944366 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22b75bc-f9ca-4b8f-ae95-5d348d367d56-config\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:18 crc kubenswrapper[4820]: W0221 08:30:18.049016 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c431de9_6c4a_4279_a63a_bd6742fc68f0.slice/crio-8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9 WatchSource:0}: Error finding container 8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9: Status 404 returned error can't find the container with id 8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9 Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.053856 4820 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6dfc499f-dvr9b"] Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.407317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerStarted","Data":"4aec6550f253350b45648012f31f58129252e80b8f4a077d2ae4a04253b9a5a2"} Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.407621 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerStarted","Data":"8307265b07a6a881b691716ab60e7476d6ece76bad7378a692d664bf88b949d9"} Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.411125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" event={"ID":"d22b75bc-f9ca-4b8f-ae95-5d348d367d56","Type":"ContainerDied","Data":"d67f845d3717911b1815a01ec1fd7dc0df11dc2b02acfc8a168dc3d28d255825"} Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.411192 4820 scope.go:117] "RemoveContainer" containerID="0a4720267f768f28f7e592e7fa4dcfc42e1fbbe5a9ed8b90b1f97ebb0060eaf8" Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.411216 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b58db4ff-kq4r6" Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.471333 4820 scope.go:117] "RemoveContainer" containerID="f215d8f5dd859dfa673e3e2892b1a89b1627e9a6ac4059705534b7571162daeb" Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.478391 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:30:18 crc kubenswrapper[4820]: I0221 08:30:18.484131 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64b58db4ff-kq4r6"] Feb 21 08:30:19 crc kubenswrapper[4820]: I0221 08:30:19.446498 4820 generic.go:334] "Generic (PLEG): container finished" podID="6c431de9-6c4a-4279-a63a-bd6742fc68f0" containerID="4aec6550f253350b45648012f31f58129252e80b8f4a077d2ae4a04253b9a5a2" exitCode=0 Feb 21 08:30:19 crc kubenswrapper[4820]: I0221 08:30:19.446546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerDied","Data":"4aec6550f253350b45648012f31f58129252e80b8f4a077d2ae4a04253b9a5a2"} Feb 21 08:30:19 crc kubenswrapper[4820]: I0221 08:30:19.709903 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" path="/var/lib/kubelet/pods/d22b75bc-f9ca-4b8f-ae95-5d348d367d56/volumes" Feb 21 08:30:20 crc kubenswrapper[4820]: I0221 08:30:20.458993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" event={"ID":"6c431de9-6c4a-4279-a63a-bd6742fc68f0","Type":"ContainerStarted","Data":"bfa8083802a756ad9f9e1dd40034460c68ee5ba8c8c0395850ca2b68518651b7"} Feb 21 08:30:20 crc kubenswrapper[4820]: I0221 08:30:20.459938 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:20 crc kubenswrapper[4820]: I0221 08:30:20.489443 4820 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" podStartSLOduration=3.489425698 podStartE2EDuration="3.489425698s" podCreationTimestamp="2026-02-21 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 08:30:20.479301565 +0000 UTC m=+6195.512385773" watchObservedRunningTime="2026-02-21 08:30:20.489425698 +0000 UTC m=+6195.522509896" Feb 21 08:30:22 crc kubenswrapper[4820]: I0221 08:30:22.697415 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:30:22 crc kubenswrapper[4820]: E0221 08:30:22.698002 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:30:27 crc kubenswrapper[4820]: I0221 08:30:27.539229 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6dfc499f-dvr9b" Feb 21 08:30:27 crc kubenswrapper[4820]: I0221 08:30:27.613496 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"] Feb 21 08:30:27 crc kubenswrapper[4820]: I0221 08:30:27.613781 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns" containerID="cri-o://e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" gracePeriod=10 Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.212032 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.284932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285076 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.285331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") pod \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\" (UID: \"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca\") " Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.306759 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx" (OuterVolumeSpecName: "kube-api-access-752kx") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "kube-api-access-752kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.339053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.339739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.339965 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config" (OuterVolumeSpecName: "config") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.347117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.364543 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" (UID: "f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388090 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-752kx\" (UniqueName: \"kubernetes.io/projected/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-kube-api-access-752kx\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388119 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388128 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388138 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-config\") on node \"crc\" DevicePath \"\"" Feb 21 
08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388148 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.388156 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541351 4820 generic.go:334] "Generic (PLEG): container finished" podID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48" exitCode=0
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541413 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541435 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerDied","Data":"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"}
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c5cfb7c-qgqzk" event={"ID":"f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca","Type":"ContainerDied","Data":"dab5f24bf00cb2fd5017ef99f29db8355c02558ab97cb6a0a73e353ab2a7ff13"}
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.541490 4820 scope.go:117] "RemoveContainer" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.576585 4820 scope.go:117] "RemoveContainer" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.618741 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"]
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.625028 4820 scope.go:117] "RemoveContainer" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"
Feb 21 08:30:28 crc kubenswrapper[4820]: E0221 08:30:28.625970 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48\": container with ID starting with e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48 not found: ID does not exist" containerID="e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.626009 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48"} err="failed to get container status \"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48\": rpc error: code = NotFound desc = could not find container \"e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48\": container with ID starting with e5ad199b4d355dd34e3e10cc9eabe9aa34d64bfc77909237e79846be54cb9e48 not found: ID does not exist"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.626029 4820 scope.go:117] "RemoveContainer" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"
Feb 21 08:30:28 crc kubenswrapper[4820]: E0221 08:30:28.627497 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff\": container with ID starting with 748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff not found: ID does not exist" containerID="748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.627523 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff"} err="failed to get container status \"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff\": rpc error: code = NotFound desc = could not find container \"748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff\": container with ID starting with 748e57aceab2841eb1410f854bdd4241d0835beea5b4178c48704374bfeb0fff not found: ID does not exist"
Feb 21 08:30:28 crc kubenswrapper[4820]: I0221 08:30:28.648419 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c5cfb7c-qgqzk"]
Feb 21 08:30:29 crc kubenswrapper[4820]: I0221 08:30:29.710186 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" path="/var/lib/kubelet/pods/f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca/volumes"
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.045014 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.056854 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rllks"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.069400 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.087228 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.098401 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cszw4"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.106510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-48s57"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.114698 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.124210 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rllks"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.133366 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9cdf-account-create-update-r2dfp"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.141370 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9237-account-create-update-4lj2f"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.149353 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7934-account-create-update-tq229"]
Feb 21 08:30:30 crc kubenswrapper[4820]: I0221 08:30:30.157603 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-48s57"]
Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.709214 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10066581-0763-4940-bcba-cdd983819ef7" path="/var/lib/kubelet/pods/10066581-0763-4940-bcba-cdd983819ef7/volumes"
Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.709946 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a418ce3-1a88-442d-9c0a-3aea9ad0cc51" path="/var/lib/kubelet/pods/1a418ce3-1a88-442d-9c0a-3aea9ad0cc51/volumes"
Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.710549 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245926d7-e415-4af9-b793-9546bb73dc0c" path="/var/lib/kubelet/pods/245926d7-e415-4af9-b793-9546bb73dc0c/volumes"
Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.711102 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b" path="/var/lib/kubelet/pods/77af7b4f-8cbc-42a2-b0ab-fb0f17ef040b/volumes"
Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.712207 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96717fc4-053b-4426-ab50-dc0786c2eb7e" path="/var/lib/kubelet/pods/96717fc4-053b-4426-ab50-dc0786c2eb7e/volumes"
Feb 21 08:30:31 crc kubenswrapper[4820]: I0221 08:30:31.712839 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47106ba-9033-418d-a248-6f7ee03d05e6" path="/var/lib/kubelet/pods/e47106ba-9033-418d-a248-6f7ee03d05e6/volumes"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.568525 4820 scope.go:117] "RemoveContainer" containerID="4752965fe12233721da16be2026cb8f90d08c2deaae354b54d275686b6e0952f"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.615388 4820 scope.go:117] "RemoveContainer" containerID="7fef589dd234562a1f8ed9fdd1d4bca07d4fd2cbf607d93270b0548c9a879418"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.676704 4820 scope.go:117] "RemoveContainer" containerID="596a2e41ee647dbd1d667628c46432c71a17e9b1604655abed8696d3d2255d8e"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.722542 4820 scope.go:117] "RemoveContainer" containerID="d2cad300294ab354787d808751187ff2212790e752b7fb9cb18149cc806b0681"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.786267 4820 scope.go:117] "RemoveContainer" containerID="5026a57c2b358309b7948ddf106308e40b701e9677338916048733307f4310bc"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.860273 4820 scope.go:117] "RemoveContainer" containerID="0fea29e38ddb40995e5831792abda163aa5514fd473324369df5f3b8327ea829"
Feb 21 08:30:34 crc kubenswrapper[4820]: I0221 08:30:34.919563 4820 scope.go:117] "RemoveContainer" containerID="112dd10479e3747f08f12ee8430488451d124d8475edfb2fee1ed65fd14153d8"
Feb 21 08:30:36 crc kubenswrapper[4820]: I0221 08:30:36.697030 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:30:36 crc kubenswrapper[4820]: E0221 08:30:36.697854 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.908036 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"]
Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909224 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909351 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns"
Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909431 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909517 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns"
Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909615 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="init"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909684 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="init"
Feb 21 08:30:37 crc kubenswrapper[4820]: E0221 08:30:37.909750 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="init"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.909810 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="init"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.910045 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22b75bc-f9ca-4b8f-ae95-5d348d367d56" containerName="dnsmasq-dns"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.910106 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fc5f96-cf9e-401e-ad9d-64ba6e5571ca" containerName="dnsmasq-dns"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.910907 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.916203 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.916206 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.916907 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"]
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.917880 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.918033 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.991929 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.992008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.992068 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:37 crc kubenswrapper[4820]: I0221 08:30:37.992095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.093841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.100490 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.101717 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.112851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.131745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.229445 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:30:38 crc kubenswrapper[4820]: I0221 08:30:38.850631 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"]
Feb 21 08:30:38 crc kubenswrapper[4820]: W0221 08:30:38.856290 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f7b8c5_1ad0_4d18_bf56_89197679507f.slice/crio-3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e WatchSource:0}: Error finding container 3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e: Status 404 returned error can't find the container with id 3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e
Feb 21 08:30:39 crc kubenswrapper[4820]: I0221 08:30:39.678018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerStarted","Data":"3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e"}
Feb 21 08:30:47 crc kubenswrapper[4820]: I0221 08:30:47.697717 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:30:47 crc kubenswrapper[4820]: E0221 08:30:47.699015 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:30:48 crc kubenswrapper[4820]: I0221 08:30:48.766131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerStarted","Data":"e248ae992782d8fdf324b752580f72472acdc6b258d0648b1ae93d9c503903c9"}
Feb 21 08:30:48 crc kubenswrapper[4820]: I0221 08:30:48.804900 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" podStartSLOduration=2.280308076 podStartE2EDuration="11.804880373s" podCreationTimestamp="2026-02-21 08:30:37 +0000 UTC" firstStartedPulling="2026-02-21 08:30:38.85929121 +0000 UTC m=+6213.892375408" lastFinishedPulling="2026-02-21 08:30:48.383863507 +0000 UTC m=+6223.416947705" observedRunningTime="2026-02-21 08:30:48.796982169 +0000 UTC m=+6223.830066367" watchObservedRunningTime="2026-02-21 08:30:48.804880373 +0000 UTC m=+6223.837964571"
Feb 21 08:30:54 crc kubenswrapper[4820]: I0221 08:30:54.052363 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"]
Feb 21 08:30:54 crc kubenswrapper[4820]: I0221 08:30:54.070479 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kjc5t"]
Feb 21 08:30:55 crc kubenswrapper[4820]: I0221 08:30:55.711503 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae13708-c06f-4967-901f-8ea42fdca38c" path="/var/lib/kubelet/pods/2ae13708-c06f-4967-901f-8ea42fdca38c/volumes"
Feb 21 08:31:00 crc kubenswrapper[4820]: I0221 08:31:00.697791 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:31:00 crc kubenswrapper[4820]: E0221 08:31:00.698794 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:31:01 crc kubenswrapper[4820]: I0221 08:31:01.886811 4820 generic.go:334] "Generic (PLEG): container finished" podID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerID="e248ae992782d8fdf324b752580f72472acdc6b258d0648b1ae93d9c503903c9" exitCode=0
Feb 21 08:31:01 crc kubenswrapper[4820]: I0221 08:31:01.886871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerDied","Data":"e248ae992782d8fdf324b752580f72472acdc6b258d0648b1ae93d9c503903c9"}
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.348375 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448002 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") "
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") "
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448342 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") "
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.448379 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") pod \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\" (UID: \"d5f7b8c5-1ad0-4d18-bf56-89197679507f\") "
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.462128 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8" (OuterVolumeSpecName: "kube-api-access-8kgd8") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "kube-api-access-8kgd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.462355 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.485228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory" (OuterVolumeSpecName: "inventory") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.485899 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d5f7b8c5-1ad0-4d18-bf56-89197679507f" (UID: "d5f7b8c5-1ad0-4d18-bf56-89197679507f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550737 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kgd8\" (UniqueName: \"kubernetes.io/projected/d5f7b8c5-1ad0-4d18-bf56-89197679507f-kube-api-access-8kgd8\") on node \"crc\" DevicePath \"\""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550775 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550787 4820 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.550800 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5f7b8c5-1ad0-4d18-bf56-89197679507f-inventory\") on node \"crc\" DevicePath \"\""
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.902993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n" event={"ID":"d5f7b8c5-1ad0-4d18-bf56-89197679507f","Type":"ContainerDied","Data":"3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e"}
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.903038 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d7c555e7302e3e403cf34f2a5abb17847652b9341f0bf535d5bb4907dc1c37e"
Feb 21 08:31:03 crc kubenswrapper[4820]: I0221 08:31:03.903048 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.527989 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"]
Feb 21 08:31:10 crc kubenswrapper[4820]: E0221 08:31:10.528927 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.528943 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.529168 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f7b8c5-1ad0-4d18-bf56-89197679507f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.529972 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.532353 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.533026 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.533203 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.533603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.548478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"]
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.711726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.813978 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.814173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.814218 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.814266 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.820417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.820758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.824389 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.831421 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:10 crc kubenswrapper[4820]: I0221 08:31:10.849061 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"
Feb 21 08:31:11 crc kubenswrapper[4820]: I0221 08:31:11.448254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk"]
Feb 21 08:31:11 crc kubenswrapper[4820]: I0221 08:31:11.975313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerStarted","Data":"bfde1cc6b595c74e965cbaa1483573efc5b20ea19714eb3b49b85d75603b542d"}
Feb 21 08:31:12 crc kubenswrapper[4820]: I0221 08:31:12.991513 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerStarted","Data":"ec1a4c393a9121270be39171bf2da08c8a063040bae700684bfcf3b9d8f4d3c2"}
Feb 21 08:31:13 crc kubenswrapper[4820]: I0221 08:31:13.022806 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" podStartSLOduration=2.24938692 podStartE2EDuration="3.0227767s" podCreationTimestamp="2026-02-21 08:31:10 +0000 UTC" firstStartedPulling="2026-02-21 08:31:11.459363621 +0000 UTC m=+6246.492447819" lastFinishedPulling="2026-02-21 08:31:12.232753401 +0000 UTC m=+6247.265837599" observedRunningTime="2026-02-21 08:31:13.010098248 +0000 UTC m=+6248.043182476" watchObservedRunningTime="2026-02-21 08:31:13.0227767 +0000 UTC m=+6248.055860898"
Feb 21 08:31:14 crc kubenswrapper[4820]: I0221 08:31:14.042510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"]
Feb 21 08:31:14 crc kubenswrapper[4820]: I0221 08:31:14.052093 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lwzsj"]
Feb 21 08:31:14 crc kubenswrapper[4820]: I0221 08:31:14.696685 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8"
Feb 21 08:31:14 crc kubenswrapper[4820]: E0221 08:31:14.697114 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.029663 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.039867 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wf76m"]
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.730931 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ace6b1-75c4-451e-b167-1dbe9b2471ca" path="/var/lib/kubelet/pods/36ace6b1-75c4-451e-b167-1dbe9b2471ca/volumes"
Feb 21 08:31:15 crc kubenswrapper[4820]: I0221 08:31:15.731517 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="52c86e8d-fde8-46e2-856f-10b3444f1ed7" path="/var/lib/kubelet/pods/52c86e8d-fde8-46e2-856f-10b3444f1ed7/volumes" Feb 21 08:31:25 crc kubenswrapper[4820]: I0221 08:31:25.697456 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:31:25 crc kubenswrapper[4820]: E0221 08:31:25.701410 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.159894 4820 scope.go:117] "RemoveContainer" containerID="bf29bba5483173a5926ca22d4373cc490219d20fafdd835f34ed3749087c8610" Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.192061 4820 scope.go:117] "RemoveContainer" containerID="401aa1cc9b63be74ac5d6945ba27a6f816214705ac3c1915809f5508ba44aa76" Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.258229 4820 scope.go:117] "RemoveContainer" containerID="cb3f4ce0b0215a0db2f78f709a8d3c26d681a5c2f85f5e3e4402255224c51737" Feb 21 08:31:35 crc kubenswrapper[4820]: I0221 08:31:35.314585 4820 scope.go:117] "RemoveContainer" containerID="a7ecd295ca0eafe15872121cc6b4a13c28ba3248d670e0f6c5e46ff6c31cdd60" Feb 21 08:31:39 crc kubenswrapper[4820]: I0221 08:31:39.696817 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:31:39 crc kubenswrapper[4820]: E0221 08:31:39.697436 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:31:52 crc kubenswrapper[4820]: I0221 08:31:52.697435 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:31:52 crc kubenswrapper[4820]: E0221 08:31:52.698959 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:32:01 crc kubenswrapper[4820]: I0221 08:32:01.040383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"] Feb 21 08:32:01 crc kubenswrapper[4820]: I0221 08:32:01.048821 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5x9p7"] Feb 21 08:32:01 crc kubenswrapper[4820]: I0221 08:32:01.710002 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f525d5cb-a9d6-4121-bf15-1e7af7974e4f" path="/var/lib/kubelet/pods/f525d5cb-a9d6-4121-bf15-1e7af7974e4f/volumes" Feb 21 08:32:04 crc kubenswrapper[4820]: I0221 08:32:04.696935 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:32:04 crc kubenswrapper[4820]: E0221 08:32:04.697523 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:32:15 crc kubenswrapper[4820]: I0221 08:32:15.704865 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:32:16 crc kubenswrapper[4820]: I0221 08:32:16.677818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885"} Feb 21 08:32:35 crc kubenswrapper[4820]: I0221 08:32:35.451168 4820 scope.go:117] "RemoveContainer" containerID="34a4e1cb1b83b0c97801cf2ba65b4150edc304d737f6d6fdb49f999d85a21849" Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.039260 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.048071 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.057800 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-s4h7q"] Feb 21 08:34:42 crc kubenswrapper[4820]: I0221 08:34:42.066188 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-029a-account-create-update-bm98m"] Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.711196 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84358593-717e-4372-b9bb-28a34fb65b6e" path="/var/lib/kubelet/pods/84358593-717e-4372-b9bb-28a34fb65b6e/volumes" Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.712083 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d69513ef-06f3-4770-9e89-5b7b7fe873b2" path="/var/lib/kubelet/pods/d69513ef-06f3-4770-9e89-5b7b7fe873b2/volumes" Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.815927 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:34:43 crc kubenswrapper[4820]: I0221 08:34:43.815981 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:34:58 crc kubenswrapper[4820]: I0221 08:34:58.029514 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:34:58 crc kubenswrapper[4820]: I0221 08:34:58.044287 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-27sgb"] Feb 21 08:34:59 crc kubenswrapper[4820]: I0221 08:34:59.708623 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898015a2-3ff9-4c61-b164-4a6961c44884" path="/var/lib/kubelet/pods/898015a2-3ff9-4c61-b164-4a6961c44884/volumes" Feb 21 08:35:13 crc kubenswrapper[4820]: I0221 08:35:13.815825 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:35:13 crc kubenswrapper[4820]: I0221 08:35:13.816423 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:35:35 crc kubenswrapper[4820]: I0221 08:35:35.578412 4820 scope.go:117] "RemoveContainer" containerID="68774d2f4de18b7806f40ee1b0b156252a789383fdca19150a9a891e3ca19dd7" Feb 21 08:35:35 crc kubenswrapper[4820]: I0221 08:35:35.619912 4820 scope.go:117] "RemoveContainer" containerID="14af9ba959135f7ccb7c53b58530a4f859881a49edc0cec93b0e45e191a3c245" Feb 21 08:35:35 crc kubenswrapper[4820]: I0221 08:35:35.665744 4820 scope.go:117] "RemoveContainer" containerID="a12df1c2f01a52b23e3ee09bfc109790a329f88bd152cdf89529c2311ee4b560" Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.816646 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.818723 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.818916 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.820016 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:35:43 crc kubenswrapper[4820]: I0221 08:35:43.820229 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885" gracePeriod=600 Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.524758 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885" exitCode=0 Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.524835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885"} Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.525134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e"} Feb 21 08:35:44 crc kubenswrapper[4820]: I0221 08:35:44.525163 4820 scope.go:117] "RemoveContainer" containerID="9d8eb6a806aa44c4bd91200abf946cfc5a7233e0762333ac9542de505a0c40f8" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.295303 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"] Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.299589 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.320486 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"] Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.458193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.458646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.459077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.561423 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.561478 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.561544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.562041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.562204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.584353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"redhat-operators-8fpfw\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:50 crc kubenswrapper[4820]: I0221 08:37:50.654454 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.144744 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"] Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.787863 4820 generic.go:334] "Generic (PLEG): container finished" podID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367" exitCode=0 Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.788148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"} Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.788460 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerStarted","Data":"771767d9cd3d95583fa8ce6ec1ccfb0d4f2276dc7d02d52e89caa08934a3a98f"} Feb 21 08:37:51 crc kubenswrapper[4820]: I0221 08:37:51.792103 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:37:54 crc kubenswrapper[4820]: I0221 08:37:54.817469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerStarted","Data":"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"} Feb 21 08:38:07 crc kubenswrapper[4820]: I0221 08:38:07.933337 4820 generic.go:334] "Generic (PLEG): container finished" podID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f" exitCode=0 Feb 21 08:38:07 crc kubenswrapper[4820]: I0221 08:38:07.933853 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"} Feb 21 08:38:09 crc kubenswrapper[4820]: I0221 08:38:09.952549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerStarted","Data":"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"} Feb 21 08:38:09 crc kubenswrapper[4820]: I0221 08:38:09.972974 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fpfw" podStartSLOduration=2.9298510220000002 podStartE2EDuration="19.972956272s" podCreationTimestamp="2026-02-21 08:37:50 +0000 UTC" firstStartedPulling="2026-02-21 08:37:51.791786996 +0000 UTC m=+6646.824871194" lastFinishedPulling="2026-02-21 08:38:08.834892246 +0000 UTC m=+6663.867976444" observedRunningTime="2026-02-21 08:38:09.971535524 +0000 UTC m=+6665.004619742" watchObservedRunningTime="2026-02-21 08:38:09.972956272 +0000 UTC m=+6665.006040470" Feb 21 08:38:10 crc kubenswrapper[4820]: I0221 08:38:10.655101 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:38:10 crc kubenswrapper[4820]: I0221 08:38:10.655154 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:38:11 crc kubenswrapper[4820]: I0221 08:38:11.701051 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fpfw" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" probeResult="failure" output=< Feb 21 08:38:11 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 08:38:11 crc kubenswrapper[4820]: > Feb 21 08:38:13 crc kubenswrapper[4820]: I0221 
08:38:13.816259 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:38:13 crc kubenswrapper[4820]: I0221 08:38:13.817583 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:38:20 crc kubenswrapper[4820]: I0221 08:38:20.714913 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:38:20 crc kubenswrapper[4820]: I0221 08:38:20.769487 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:38:21 crc kubenswrapper[4820]: I0221 08:38:21.499082 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"] Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.069975 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fpfw" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" containerID="cri-o://511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" gracePeriod=2 Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.672781 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.682845 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") pod \"87e227d7-07f4-4f82-9a8f-0527ec367368\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.683149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") pod \"87e227d7-07f4-4f82-9a8f-0527ec367368\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.683279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") pod \"87e227d7-07f4-4f82-9a8f-0527ec367368\" (UID: \"87e227d7-07f4-4f82-9a8f-0527ec367368\") " Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.683949 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities" (OuterVolumeSpecName: "utilities") pod "87e227d7-07f4-4f82-9a8f-0527ec367368" (UID: "87e227d7-07f4-4f82-9a8f-0527ec367368"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.684290 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.694945 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn" (OuterVolumeSpecName: "kube-api-access-sz9qn") pod "87e227d7-07f4-4f82-9a8f-0527ec367368" (UID: "87e227d7-07f4-4f82-9a8f-0527ec367368"). InnerVolumeSpecName "kube-api-access-sz9qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.785458 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz9qn\" (UniqueName: \"kubernetes.io/projected/87e227d7-07f4-4f82-9a8f-0527ec367368-kube-api-access-sz9qn\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.826211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87e227d7-07f4-4f82-9a8f-0527ec367368" (UID: "87e227d7-07f4-4f82-9a8f-0527ec367368"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:38:22 crc kubenswrapper[4820]: I0221 08:38:22.888680 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e227d7-07f4-4f82-9a8f-0527ec367368-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082856 4820 generic.go:334] "Generic (PLEG): container finished" podID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" exitCode=0 Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"} Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpfw" event={"ID":"87e227d7-07f4-4f82-9a8f-0527ec367368","Type":"ContainerDied","Data":"771767d9cd3d95583fa8ce6ec1ccfb0d4f2276dc7d02d52e89caa08934a3a98f"} Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.082988 4820 scope.go:117] "RemoveContainer" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.084069 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpfw" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.112637 4820 scope.go:117] "RemoveContainer" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.131330 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"] Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.138613 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fpfw"] Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.146407 4820 scope.go:117] "RemoveContainer" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.212373 4820 scope.go:117] "RemoveContainer" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" Feb 21 08:38:23 crc kubenswrapper[4820]: E0221 08:38:23.213076 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60\": container with ID starting with 511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60 not found: ID does not exist" containerID="511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.213113 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60"} err="failed to get container status \"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60\": rpc error: code = NotFound desc = could not find container \"511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60\": container with ID starting with 511a2a79360779b2eec82901ce45d319fef5edd991d8ef7eb02ca51c15df2d60 not found: ID does 
not exist" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.213139 4820 scope.go:117] "RemoveContainer" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f" Feb 21 08:38:23 crc kubenswrapper[4820]: E0221 08:38:23.213991 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f\": container with ID starting with 437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f not found: ID does not exist" containerID="437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.214075 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f"} err="failed to get container status \"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f\": rpc error: code = NotFound desc = could not find container \"437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f\": container with ID starting with 437d1974e2401c90675ee57ac36d8b23616db2a0337f0bac9c193af030f9bd1f not found: ID does not exist" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.214126 4820 scope.go:117] "RemoveContainer" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367" Feb 21 08:38:23 crc kubenswrapper[4820]: E0221 08:38:23.214628 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367\": container with ID starting with 34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367 not found: ID does not exist" containerID="34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.214674 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367"} err="failed to get container status \"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367\": rpc error: code = NotFound desc = could not find container \"34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367\": container with ID starting with 34b12de872f325093458111ac00a20b84df8b675fb78d9d58fe0dafcae734367 not found: ID does not exist" Feb 21 08:38:23 crc kubenswrapper[4820]: I0221 08:38:23.708292 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" path="/var/lib/kubelet/pods/87e227d7-07f4-4f82-9a8f-0527ec367368/volumes" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913050 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:25 crc kubenswrapper[4820]: E0221 08:38:25.913845 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-content" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913861 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-content" Feb 21 08:38:25 crc kubenswrapper[4820]: E0221 08:38:25.913880 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-utilities" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913889 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="extract-utilities" Feb 21 08:38:25 crc kubenswrapper[4820]: E0221 08:38:25.913900 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.913909 4820 
state_mem.go:107] "Deleted CPUSet assignment" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.914115 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e227d7-07f4-4f82-9a8f-0527ec367368" containerName="registry-server" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.915862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:25 crc kubenswrapper[4820]: I0221 08:38:25.925572 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.051938 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.052062 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.052144 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 
08:38:26.153976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154095 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154252 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.154921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.177887 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"certified-operators-d9pxh\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.249436 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:26 crc kubenswrapper[4820]: I0221 08:38:26.760021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:27 crc kubenswrapper[4820]: I0221 08:38:27.124569 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22"} Feb 21 08:38:27 crc kubenswrapper[4820]: I0221 08:38:27.124969 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"5867d43fa922ffbae98259790d399db7d1b8ab2f0a64c395b0af3a9f3f6b381f"} Feb 21 08:38:28 crc kubenswrapper[4820]: I0221 08:38:28.135809 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" exitCode=0 Feb 21 08:38:28 crc kubenswrapper[4820]: I0221 08:38:28.135867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22"} Feb 21 08:38:29 crc kubenswrapper[4820]: I0221 08:38:29.147498 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e"} Feb 21 08:38:32 crc kubenswrapper[4820]: I0221 08:38:32.173713 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" exitCode=0 Feb 21 08:38:32 crc kubenswrapper[4820]: I0221 08:38:32.173819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e"} Feb 21 08:38:33 crc kubenswrapper[4820]: I0221 08:38:33.186875 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerStarted","Data":"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5"} Feb 21 08:38:33 crc kubenswrapper[4820]: I0221 08:38:33.211844 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9pxh" podStartSLOduration=3.589244064 podStartE2EDuration="8.211816418s" podCreationTimestamp="2026-02-21 08:38:25 +0000 UTC" firstStartedPulling="2026-02-21 08:38:28.139134675 +0000 UTC m=+6683.172218873" lastFinishedPulling="2026-02-21 08:38:32.761707029 +0000 UTC m=+6687.794791227" observedRunningTime="2026-02-21 08:38:33.203918605 +0000 UTC m=+6688.237002823" watchObservedRunningTime="2026-02-21 08:38:33.211816418 +0000 UTC m=+6688.244900616" Feb 21 08:38:36 crc kubenswrapper[4820]: I0221 08:38:36.250630 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 
08:38:36 crc kubenswrapper[4820]: I0221 08:38:36.251196 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:36 crc kubenswrapper[4820]: I0221 08:38:36.299114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:37 crc kubenswrapper[4820]: I0221 08:38:37.269977 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:37 crc kubenswrapper[4820]: I0221 08:38:37.330199 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.238071 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9pxh" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" containerID="cri-o://5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" gracePeriod=2 Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.731010 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.915759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") pod \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.915935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") pod \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.916162 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") pod \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\" (UID: \"f0066fa0-f8d5-41f0-9661-d47a8a0e501d\") " Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.916777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities" (OuterVolumeSpecName: "utilities") pod "f0066fa0-f8d5-41f0-9661-d47a8a0e501d" (UID: "f0066fa0-f8d5-41f0-9661-d47a8a0e501d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.917644 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:39 crc kubenswrapper[4820]: I0221 08:38:39.923575 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm" (OuterVolumeSpecName: "kube-api-access-dn6vm") pod "f0066fa0-f8d5-41f0-9661-d47a8a0e501d" (UID: "f0066fa0-f8d5-41f0-9661-d47a8a0e501d"). InnerVolumeSpecName "kube-api-access-dn6vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.019955 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn6vm\" (UniqueName: \"kubernetes.io/projected/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-kube-api-access-dn6vm\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248646 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" exitCode=0 Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5"} Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9pxh" event={"ID":"f0066fa0-f8d5-41f0-9661-d47a8a0e501d","Type":"ContainerDied","Data":"5867d43fa922ffbae98259790d399db7d1b8ab2f0a64c395b0af3a9f3f6b381f"} Feb 21 08:38:40 crc kubenswrapper[4820]: 
I0221 08:38:40.248740 4820 scope.go:117] "RemoveContainer" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.248881 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9pxh" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.268527 4820 scope.go:117] "RemoveContainer" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.286529 4820 scope.go:117] "RemoveContainer" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.328995 4820 scope.go:117] "RemoveContainer" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" Feb 21 08:38:40 crc kubenswrapper[4820]: E0221 08:38:40.329485 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5\": container with ID starting with 5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5 not found: ID does not exist" containerID="5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.329524 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5"} err="failed to get container status \"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5\": rpc error: code = NotFound desc = could not find container \"5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5\": container with ID starting with 5848e3845c6321429855add03f83b68d2f6e018bb3efc02a5c471f39d62966c5 not found: ID does not exist" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.329555 4820 
scope.go:117] "RemoveContainer" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" Feb 21 08:38:40 crc kubenswrapper[4820]: E0221 08:38:40.330021 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e\": container with ID starting with 6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e not found: ID does not exist" containerID="6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.330073 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e"} err="failed to get container status \"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e\": rpc error: code = NotFound desc = could not find container \"6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e\": container with ID starting with 6c4a61c16fc24cc8002fbc7766e50b569b61c59006d60ae61549e3dafc9c519e not found: ID does not exist" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.330108 4820 scope.go:117] "RemoveContainer" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" Feb 21 08:38:40 crc kubenswrapper[4820]: E0221 08:38:40.330507 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22\": container with ID starting with 8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22 not found: ID does not exist" containerID="8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22" Feb 21 08:38:40 crc kubenswrapper[4820]: I0221 08:38:40.330546 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22"} err="failed to get container status \"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22\": rpc error: code = NotFound desc = could not find container \"8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22\": container with ID starting with 8a07427668493272e29c987993ec5f353e17a357c579eaa13323e8e7c409bc22 not found: ID does not exist" Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.043736 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0066fa0-f8d5-41f0-9661-d47a8a0e501d" (UID: "f0066fa0-f8d5-41f0-9661-d47a8a0e501d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.142749 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0066fa0-f8d5-41f0-9661-d47a8a0e501d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.186080 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.196310 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9pxh"] Feb 21 08:38:41 crc kubenswrapper[4820]: I0221 08:38:41.708597 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" path="/var/lib/kubelet/pods/f0066fa0-f8d5-41f0-9661-d47a8a0e501d/volumes" Feb 21 08:38:43 crc kubenswrapper[4820]: I0221 08:38:43.815948 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:38:43 crc kubenswrapper[4820]: I0221 08:38:43.816315 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:38:48 crc kubenswrapper[4820]: I0221 08:38:48.040078 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:38:48 crc kubenswrapper[4820]: I0221 08:38:48.056256 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-rrxv7"] Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.039188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.048151 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1ff2-account-create-update-lcrwl"] Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.713813 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b874f59-5a8f-4ecc-8405-4993b1fe7fc2" path="/var/lib/kubelet/pods/0b874f59-5a8f-4ecc-8405-4993b1fe7fc2/volumes" Feb 21 08:38:49 crc kubenswrapper[4820]: I0221 08:38:49.715082 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e9afae-f779-41ff-af87-712577c90f88" path="/var/lib/kubelet/pods/c1e9afae-f779-41ff-af87-712577c90f88/volumes" Feb 21 08:39:05 crc kubenswrapper[4820]: I0221 08:39:05.035178 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 08:39:05 crc kubenswrapper[4820]: I0221 08:39:05.049871 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-qk6xf"] Feb 21 
08:39:05 crc kubenswrapper[4820]: I0221 08:39:05.709984 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a" path="/var/lib/kubelet/pods/cbe35ddb-c3e7-4233-96a1-fe0df9e13f6a/volumes" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.815909 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.816534 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.816584 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.817346 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:39:13 crc kubenswrapper[4820]: I0221 08:39:13.817401 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" 
containerID="cri-o://5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" gracePeriod=600 Feb 21 08:39:14 crc kubenswrapper[4820]: E0221 08:39:14.038112 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.545717 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" exitCode=0 Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.545784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e"} Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.545825 4820 scope.go:117] "RemoveContainer" containerID="9c0f51850c3ee976a89c7e9ab65d7e1bbc5ca6ad0bc6054c44f678cb78a80885" Feb 21 08:39:14 crc kubenswrapper[4820]: I0221 08:39:14.546706 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:14 crc kubenswrapper[4820]: E0221 08:39:14.547070 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.930176 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:25 crc kubenswrapper[4820]: E0221 08:39:25.931277 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931295 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" Feb 21 08:39:25 crc kubenswrapper[4820]: E0221 08:39:25.931311 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-content" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931318 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-content" Feb 21 08:39:25 crc kubenswrapper[4820]: E0221 08:39:25.931346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-utilities" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931355 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="extract-utilities" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.931584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0066fa0-f8d5-41f0-9661-d47a8a0e501d" containerName="registry-server" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.933964 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:25 crc kubenswrapper[4820]: I0221 08:39:25.947570 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.075177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.075309 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.075472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.177527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.177585 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.177741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.178212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.178217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.205393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"redhat-marketplace-8gpq6\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.255948 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.697264 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:26 crc kubenswrapper[4820]: E0221 08:39:26.698023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:26 crc kubenswrapper[4820]: I0221 08:39:26.843440 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:27 crc kubenswrapper[4820]: I0221 08:39:27.654477 4820 generic.go:334] "Generic (PLEG): container finished" podID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerID="4ca5aa4161db0a6f9ced27acbbbefc5782f674112008fea83450ac70043bdd6a" exitCode=0 Feb 21 08:39:27 crc kubenswrapper[4820]: I0221 08:39:27.654531 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"4ca5aa4161db0a6f9ced27acbbbefc5782f674112008fea83450ac70043bdd6a"} Feb 21 08:39:27 crc kubenswrapper[4820]: I0221 08:39:27.654798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerStarted","Data":"70bbf3d2c2578d3d824cbe35868ac37f6ebd2b422ed2d2573ff8688f77bd092e"} Feb 21 08:39:29 crc kubenswrapper[4820]: I0221 08:39:29.673578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" 
event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerStarted","Data":"9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b"} Feb 21 08:39:30 crc kubenswrapper[4820]: I0221 08:39:30.683795 4820 generic.go:334] "Generic (PLEG): container finished" podID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerID="9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b" exitCode=0 Feb 21 08:39:30 crc kubenswrapper[4820]: I0221 08:39:30.683843 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b"} Feb 21 08:39:32 crc kubenswrapper[4820]: I0221 08:39:32.703913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerStarted","Data":"7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008"} Feb 21 08:39:32 crc kubenswrapper[4820]: I0221 08:39:32.730727 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gpq6" podStartSLOduration=3.297798436 podStartE2EDuration="7.730705849s" podCreationTimestamp="2026-02-21 08:39:25 +0000 UTC" firstStartedPulling="2026-02-21 08:39:27.657755989 +0000 UTC m=+6742.690840187" lastFinishedPulling="2026-02-21 08:39:32.090663402 +0000 UTC m=+6747.123747600" observedRunningTime="2026-02-21 08:39:32.721735058 +0000 UTC m=+6747.754819256" watchObservedRunningTime="2026-02-21 08:39:32.730705849 +0000 UTC m=+6747.763790047" Feb 21 08:39:35 crc kubenswrapper[4820]: I0221 08:39:35.837743 4820 scope.go:117] "RemoveContainer" containerID="0fa05988329236af07673909477dc89b9d1d1084c3a32b7028ed0991a796e02a" Feb 21 08:39:35 crc kubenswrapper[4820]: I0221 08:39:35.947754 4820 scope.go:117] "RemoveContainer" 
containerID="633afaacce752e65a5261410e5e1ea5326c34bca69f027a178d324465a8a3bac" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.256120 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.256466 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.294652 4820 scope.go:117] "RemoveContainer" containerID="283eeb9dc122d4cc0bc63ade7d171e6d57a57e8406097e757d3cb60f5fa2fcfe" Feb 21 08:39:36 crc kubenswrapper[4820]: I0221 08:39:36.300866 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:40 crc kubenswrapper[4820]: I0221 08:39:40.696890 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:40 crc kubenswrapper[4820]: E0221 08:39:40.697433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:44 crc kubenswrapper[4820]: I0221 08:39:44.981617 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:39:44 crc kubenswrapper[4820]: I0221 08:39:44.984025 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:44 crc kubenswrapper[4820]: I0221 08:39:44.993051 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.056744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.056892 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.057055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.159546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.159607 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.159665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.160225 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.160344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.181125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"community-operators-zx954\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.325352 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:45 crc kubenswrapper[4820]: I0221 08:39:45.902671 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:39:45 crc kubenswrapper[4820]: W0221 08:39:45.903664 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c219c3_571a_4a37_9baf_065b6ccbf560.slice/crio-b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797 WatchSource:0}: Error finding container b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797: Status 404 returned error can't find the container with id b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797 Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.315912 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.842507 4820 generic.go:334] "Generic (PLEG): container finished" podID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" exitCode=0 Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.842566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac"} Feb 21 08:39:46 crc kubenswrapper[4820]: I0221 08:39:46.842603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerStarted","Data":"b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797"} Feb 21 08:39:47 crc kubenswrapper[4820]: I0221 08:39:47.852309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerStarted","Data":"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f"} Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.559739 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.560304 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8gpq6" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" containerID="cri-o://7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008" gracePeriod=2 Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.880442 4820 generic.go:334] "Generic (PLEG): container finished" podID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerID="7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008" exitCode=0 Feb 21 08:39:48 crc kubenswrapper[4820]: I0221 08:39:48.880550 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008"} Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.064617 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.144650 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") pod \"56a084d3-5261-4bd8-9d65-ec3b63e30653\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.144857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") pod \"56a084d3-5261-4bd8-9d65-ec3b63e30653\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.145004 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") pod \"56a084d3-5261-4bd8-9d65-ec3b63e30653\" (UID: \"56a084d3-5261-4bd8-9d65-ec3b63e30653\") " Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.150374 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2" (OuterVolumeSpecName: "kube-api-access-d2bh2") pod "56a084d3-5261-4bd8-9d65-ec3b63e30653" (UID: "56a084d3-5261-4bd8-9d65-ec3b63e30653"). InnerVolumeSpecName "kube-api-access-d2bh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.153137 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities" (OuterVolumeSpecName: "utilities") pod "56a084d3-5261-4bd8-9d65-ec3b63e30653" (UID: "56a084d3-5261-4bd8-9d65-ec3b63e30653"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.179264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56a084d3-5261-4bd8-9d65-ec3b63e30653" (UID: "56a084d3-5261-4bd8-9d65-ec3b63e30653"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.247716 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.247755 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2bh2\" (UniqueName: \"kubernetes.io/projected/56a084d3-5261-4bd8-9d65-ec3b63e30653-kube-api-access-d2bh2\") on node \"crc\" DevicePath \"\"" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.247767 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a084d3-5261-4bd8-9d65-ec3b63e30653-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.930578 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gpq6" event={"ID":"56a084d3-5261-4bd8-9d65-ec3b63e30653","Type":"ContainerDied","Data":"70bbf3d2c2578d3d824cbe35868ac37f6ebd2b422ed2d2573ff8688f77bd092e"} Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.931058 4820 scope.go:117] "RemoveContainer" containerID="7aa50a0869bbbd9f3d0aabcfb3e3fd90360eaa62d72c2439860d75f5821f4008" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.931331 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gpq6" Feb 21 08:39:49 crc kubenswrapper[4820]: I0221 08:39:49.974499 4820 scope.go:117] "RemoveContainer" containerID="9f1fc12197d782422de5899bf8ed0590864432db61ba79b448595bd9f602492b" Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.005778 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.017569 4820 scope.go:117] "RemoveContainer" containerID="4ca5aa4161db0a6f9ced27acbbbefc5782f674112008fea83450ac70043bdd6a" Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.018061 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gpq6"] Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.942444 4820 generic.go:334] "Generic (PLEG): container finished" podID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" exitCode=0 Feb 21 08:39:50 crc kubenswrapper[4820]: I0221 08:39:50.942656 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f"} Feb 21 08:39:51 crc kubenswrapper[4820]: I0221 08:39:51.696850 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:39:51 crc kubenswrapper[4820]: E0221 08:39:51.697133 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:39:51 crc kubenswrapper[4820]: I0221 08:39:51.707885 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" path="/var/lib/kubelet/pods/56a084d3-5261-4bd8-9d65-ec3b63e30653/volumes" Feb 21 08:39:52 crc kubenswrapper[4820]: I0221 08:39:52.966983 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerStarted","Data":"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32"} Feb 21 08:39:52 crc kubenswrapper[4820]: I0221 08:39:52.990723 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zx954" podStartSLOduration=3.9662401000000003 podStartE2EDuration="8.990699863s" podCreationTimestamp="2026-02-21 08:39:44 +0000 UTC" firstStartedPulling="2026-02-21 08:39:46.844204634 +0000 UTC m=+6761.877288832" lastFinishedPulling="2026-02-21 08:39:51.868664397 +0000 UTC m=+6766.901748595" observedRunningTime="2026-02-21 08:39:52.987490377 +0000 UTC m=+6768.020574595" watchObservedRunningTime="2026-02-21 08:39:52.990699863 +0000 UTC m=+6768.023784061" Feb 21 08:39:55 crc kubenswrapper[4820]: I0221 08:39:55.326444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:55 crc kubenswrapper[4820]: I0221 08:39:55.328518 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:39:55 crc kubenswrapper[4820]: I0221 08:39:55.373566 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:05 crc kubenswrapper[4820]: I0221 08:40:05.373114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:05 crc kubenswrapper[4820]: I0221 08:40:05.424439 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.114183 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zx954" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" containerID="cri-o://88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" gracePeriod=2 Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.580552 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.626231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") pod \"10c219c3-571a-4a37-9baf-065b6ccbf560\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.626432 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") pod \"10c219c3-571a-4a37-9baf-065b6ccbf560\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.626466 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") pod \"10c219c3-571a-4a37-9baf-065b6ccbf560\" (UID: \"10c219c3-571a-4a37-9baf-065b6ccbf560\") " Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.627457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities" (OuterVolumeSpecName: "utilities") pod "10c219c3-571a-4a37-9baf-065b6ccbf560" (UID: "10c219c3-571a-4a37-9baf-065b6ccbf560"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.634459 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc" (OuterVolumeSpecName: "kube-api-access-xg9wc") pod "10c219c3-571a-4a37-9baf-065b6ccbf560" (UID: "10c219c3-571a-4a37-9baf-065b6ccbf560"). InnerVolumeSpecName "kube-api-access-xg9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.692257 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10c219c3-571a-4a37-9baf-065b6ccbf560" (UID: "10c219c3-571a-4a37-9baf-065b6ccbf560"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.696674 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:06 crc kubenswrapper[4820]: E0221 08:40:06.697057 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.728495 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.728548 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9wc\" (UniqueName: \"kubernetes.io/projected/10c219c3-571a-4a37-9baf-065b6ccbf560-kube-api-access-xg9wc\") on node \"crc\" DevicePath \"\"" Feb 21 08:40:06 crc kubenswrapper[4820]: I0221 08:40:06.728562 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c219c3-571a-4a37-9baf-065b6ccbf560-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124101 4820 generic.go:334] "Generic (PLEG): container finished" podID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" exitCode=0 Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" 
event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32"} Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zx954" event={"ID":"10c219c3-571a-4a37-9baf-065b6ccbf560","Type":"ContainerDied","Data":"b3b5baf5e7a2517a28f184c97764a96449aa1935f2eb5fae1d120ab8928cf797"} Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124188 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zx954" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.124203 4820 scope.go:117] "RemoveContainer" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.149525 4820 scope.go:117] "RemoveContainer" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.165276 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.174094 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zx954"] Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.189658 4820 scope.go:117] "RemoveContainer" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.214607 4820 scope.go:117] "RemoveContainer" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" Feb 21 08:40:07 crc kubenswrapper[4820]: E0221 08:40:07.214984 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32\": container 
with ID starting with 88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32 not found: ID does not exist" containerID="88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215022 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32"} err="failed to get container status \"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32\": rpc error: code = NotFound desc = could not find container \"88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32\": container with ID starting with 88beb01ac9175183a358a4e66a6804d055e98f7558dc5c775b96812d562f0f32 not found: ID does not exist" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215048 4820 scope.go:117] "RemoveContainer" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" Feb 21 08:40:07 crc kubenswrapper[4820]: E0221 08:40:07.215407 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f\": container with ID starting with bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f not found: ID does not exist" containerID="bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215435 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f"} err="failed to get container status \"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f\": rpc error: code = NotFound desc = could not find container \"bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f\": container with ID starting with bff934a32907c6bdc5cf3861f0fa06d42af3e6519443eb80038255b2c9a92e8f not 
found: ID does not exist" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215449 4820 scope.go:117] "RemoveContainer" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" Feb 21 08:40:07 crc kubenswrapper[4820]: E0221 08:40:07.215844 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac\": container with ID starting with ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac not found: ID does not exist" containerID="ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.215878 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac"} err="failed to get container status \"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac\": rpc error: code = NotFound desc = could not find container \"ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac\": container with ID starting with ecbf68ed91b6e0fa54c088cd28120dbd6b26caa4d9fb39abcbad46d0c4a6dfac not found: ID does not exist" Feb 21 08:40:07 crc kubenswrapper[4820]: I0221 08:40:07.708401 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" path="/var/lib/kubelet/pods/10c219c3-571a-4a37-9baf-065b6ccbf560/volumes" Feb 21 08:40:18 crc kubenswrapper[4820]: I0221 08:40:18.697064 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:18 crc kubenswrapper[4820]: E0221 08:40:18.697765 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:40:33 crc kubenswrapper[4820]: I0221 08:40:33.696863 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:33 crc kubenswrapper[4820]: E0221 08:40:33.697758 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:40:45 crc kubenswrapper[4820]: I0221 08:40:45.705099 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:40:45 crc kubenswrapper[4820]: E0221 08:40:45.706184 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:00 crc kubenswrapper[4820]: I0221 08:41:00.697403 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:00 crc kubenswrapper[4820]: E0221 08:41:00.698185 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:12 crc kubenswrapper[4820]: I0221 08:41:12.696590 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:12 crc kubenswrapper[4820]: E0221 08:41:12.698209 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:14 crc kubenswrapper[4820]: I0221 08:41:14.697003 4820 generic.go:334] "Generic (PLEG): container finished" podID="8acec915-5e23-4212-9bce-50fec475c433" containerID="ec1a4c393a9121270be39171bf2da08c8a063040bae700684bfcf3b9d8f4d3c2" exitCode=0 Feb 21 08:41:14 crc kubenswrapper[4820]: I0221 08:41:14.697081 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerDied","Data":"ec1a4c393a9121270be39171bf2da08c8a063040bae700684bfcf3b9d8f4d3c2"} Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.145112 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.200407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.200518 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.201322 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.201393 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") pod \"8acec915-5e23-4212-9bce-50fec475c433\" (UID: \"8acec915-5e23-4212-9bce-50fec475c433\") " Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.205714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8" (OuterVolumeSpecName: "kube-api-access-zxkx8") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "kube-api-access-zxkx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.206117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.226642 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.227880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory" (OuterVolumeSpecName: "inventory") pod "8acec915-5e23-4212-9bce-50fec475c433" (UID: "8acec915-5e23-4212-9bce-50fec475c433"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303566 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxkx8\" (UniqueName: \"kubernetes.io/projected/8acec915-5e23-4212-9bce-50fec475c433-kube-api-access-zxkx8\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303612 4820 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303624 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.303634 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8acec915-5e23-4212-9bce-50fec475c433-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.714812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" event={"ID":"8acec915-5e23-4212-9bce-50fec475c433","Type":"ContainerDied","Data":"bfde1cc6b595c74e965cbaa1483573efc5b20ea19714eb3b49b85d75603b542d"} Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.714856 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfde1cc6b595c74e965cbaa1483573efc5b20ea19714eb3b49b85d75603b542d" Feb 21 08:41:16 crc kubenswrapper[4820]: I0221 08:41:16.714867 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk" Feb 21 08:41:25 crc kubenswrapper[4820]: I0221 08:41:25.704786 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:25 crc kubenswrapper[4820]: E0221 08:41:25.706798 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.772595 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-h8h82"] Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773361 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773381 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773396 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773404 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773429 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acec915-5e23-4212-9bce-50fec475c433" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 21 08:41:29 crc 
kubenswrapper[4820]: I0221 08:41:29.773439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acec915-5e23-4212-9bce-50fec475c433" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773454 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773461 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="extract-content" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773481 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773488 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="extract-utilities" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773507 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773514 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: E0221 08:41:29.773528 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773535 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773743 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a084d3-5261-4bd8-9d65-ec3b63e30653" containerName="registry-server" 
Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773764 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c219c3-571a-4a37-9baf-065b6ccbf560" containerName="registry-server" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.773790 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acec915-5e23-4212-9bce-50fec475c433" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.774667 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.782754 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.782757 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.782873 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.783020 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.787802 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-h8h82"] Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886363 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.886682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod 
\"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.988879 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.994882 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.995431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:29 crc kubenswrapper[4820]: I0221 08:41:29.995561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.010005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod \"bootstrap-openstack-openstack-cell1-h8h82\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.094099 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.670024 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-h8h82"] Feb 21 08:41:30 crc kubenswrapper[4820]: I0221 08:41:30.830649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerStarted","Data":"939a170ca8bc8c0b04bcd9b59224d97aba76382b2f158866460750f35f9310c7"} Feb 21 08:41:31 crc kubenswrapper[4820]: I0221 08:41:31.842368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerStarted","Data":"5ceefbffe3a7e6b50bbc1012002add2d32abc6f4e6711dc8443fbd09e19e6cf2"} Feb 21 08:41:31 crc kubenswrapper[4820]: I0221 08:41:31.867666 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" podStartSLOduration=2.426498957 podStartE2EDuration="2.867591793s" podCreationTimestamp="2026-02-21 08:41:29 
+0000 UTC" firstStartedPulling="2026-02-21 08:41:30.671170242 +0000 UTC m=+6865.704254440" lastFinishedPulling="2026-02-21 08:41:31.112263078 +0000 UTC m=+6866.145347276" observedRunningTime="2026-02-21 08:41:31.864662494 +0000 UTC m=+6866.897746702" watchObservedRunningTime="2026-02-21 08:41:31.867591793 +0000 UTC m=+6866.900675991" Feb 21 08:41:40 crc kubenswrapper[4820]: I0221 08:41:40.696965 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:40 crc kubenswrapper[4820]: E0221 08:41:40.697785 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:41:53 crc kubenswrapper[4820]: I0221 08:41:53.697785 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:41:53 crc kubenswrapper[4820]: E0221 08:41:53.698422 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:04 crc kubenswrapper[4820]: I0221 08:42:04.697098 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:04 crc kubenswrapper[4820]: E0221 08:42:04.697874 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:17 crc kubenswrapper[4820]: I0221 08:42:17.696621 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:17 crc kubenswrapper[4820]: E0221 08:42:17.697328 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:28 crc kubenswrapper[4820]: I0221 08:42:28.696558 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:28 crc kubenswrapper[4820]: E0221 08:42:28.697304 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:39 crc kubenswrapper[4820]: I0221 08:42:39.696729 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:39 crc kubenswrapper[4820]: E0221 08:42:39.699148 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:42:54 crc kubenswrapper[4820]: I0221 08:42:54.697001 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:42:54 crc kubenswrapper[4820]: E0221 08:42:54.697914 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:05 crc kubenswrapper[4820]: I0221 08:43:05.707041 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:05 crc kubenswrapper[4820]: E0221 08:43:05.708044 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:20 crc kubenswrapper[4820]: I0221 08:43:20.697780 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:20 crc kubenswrapper[4820]: E0221 08:43:20.700410 4820 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:35 crc kubenswrapper[4820]: I0221 08:43:35.708358 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:35 crc kubenswrapper[4820]: E0221 08:43:35.710896 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:43:46 crc kubenswrapper[4820]: I0221 08:43:46.697910 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:43:46 crc kubenswrapper[4820]: E0221 08:43:46.699187 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:44:00 crc kubenswrapper[4820]: I0221 08:44:00.697277 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:44:00 crc kubenswrapper[4820]: E0221 08:44:00.698757 4820 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:44:13 crc kubenswrapper[4820]: I0221 08:44:13.697027 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:44:13 crc kubenswrapper[4820]: E0221 08:44:13.697889 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:44:28 crc kubenswrapper[4820]: I0221 08:44:28.697576 4820 scope.go:117] "RemoveContainer" containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:44:29 crc kubenswrapper[4820]: I0221 08:44:29.486190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380"} Feb 21 08:44:43 crc kubenswrapper[4820]: I0221 08:44:43.602120 4820 generic.go:334] "Generic (PLEG): container finished" podID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerID="5ceefbffe3a7e6b50bbc1012002add2d32abc6f4e6711dc8443fbd09e19e6cf2" exitCode=0 Feb 21 08:44:43 crc kubenswrapper[4820]: I0221 08:44:43.602277 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerDied","Data":"5ceefbffe3a7e6b50bbc1012002add2d32abc6f4e6711dc8443fbd09e19e6cf2"} Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.133008 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.230978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.231398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.231553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.231653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") pod \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\" (UID: \"b328f114-e2a2-4fe6-9e6d-bf8a99364733\") " Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.236589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.236813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r" (OuterVolumeSpecName: "kube-api-access-v978r") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "kube-api-access-v978r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.261023 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory" (OuterVolumeSpecName: "inventory") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.265523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "b328f114-e2a2-4fe6-9e6d-bf8a99364733" (UID: "b328f114-e2a2-4fe6-9e6d-bf8a99364733"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334375 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v978r\" (UniqueName: \"kubernetes.io/projected/b328f114-e2a2-4fe6-9e6d-bf8a99364733-kube-api-access-v978r\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334408 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334419 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.334427 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b328f114-e2a2-4fe6-9e6d-bf8a99364733-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.624876 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" event={"ID":"b328f114-e2a2-4fe6-9e6d-bf8a99364733","Type":"ContainerDied","Data":"939a170ca8bc8c0b04bcd9b59224d97aba76382b2f158866460750f35f9310c7"} Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.625295 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939a170ca8bc8c0b04bcd9b59224d97aba76382b2f158866460750f35f9310c7" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.624957 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-h8h82" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.724114 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-bdzjs"] Feb 21 08:44:45 crc kubenswrapper[4820]: E0221 08:44:45.724519 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerName="bootstrap-openstack-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.724538 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerName="bootstrap-openstack-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.724788 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b328f114-e2a2-4fe6-9e6d-bf8a99364733" containerName="bootstrap-openstack-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.726655 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.729476 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.731645 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.731815 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.731973 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.736381 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-bdzjs"] Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.846541 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.846655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.846686 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.949193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.950175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.950203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.953636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " 
pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.954104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:45 crc kubenswrapper[4820]: I0221 08:44:45.969798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"download-cache-openstack-openstack-cell1-bdzjs\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.051704 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.578516 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.602178 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-bdzjs"] Feb 21 08:44:46 crc kubenswrapper[4820]: I0221 08:44:46.637093 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerStarted","Data":"d304e4893e4a4885f85f938002c8c75a860b24a24145fb223319d0af26918d18"} Feb 21 08:44:47 crc kubenswrapper[4820]: I0221 08:44:47.649279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerStarted","Data":"1d4f0c693534689f47073c82a3834b39e5d861ea6be879a690c33b30f6e2157b"} Feb 21 08:44:47 crc kubenswrapper[4820]: I0221 08:44:47.678777 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" podStartSLOduration=2.185582396 podStartE2EDuration="2.67875432s" podCreationTimestamp="2026-02-21 08:44:45 +0000 UTC" firstStartedPulling="2026-02-21 08:44:46.57825853 +0000 UTC m=+7061.611342728" lastFinishedPulling="2026-02-21 08:44:47.071430454 +0000 UTC m=+7062.104514652" observedRunningTime="2026-02-21 08:44:47.668696436 +0000 UTC m=+7062.701780634" watchObservedRunningTime="2026-02-21 08:44:47.67875432 +0000 UTC m=+7062.711838518" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.144525 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.146671 4820 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.148872 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.154980 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.173948 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.251538 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.251622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.251774 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.353934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.354009 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.354100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.355369 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.370016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.370112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"collect-profiles-29527725-k4st8\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.475400 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:00 crc kubenswrapper[4820]: I0221 08:45:00.911185 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 08:45:00 crc kubenswrapper[4820]: W0221 08:45:00.921378 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode44294e9_1a1b_421f_bed6_f72a8bb45e1d.slice/crio-46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995 WatchSource:0}: Error finding container 46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995: Status 404 returned error can't find the container with id 46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995 Feb 21 08:45:01 crc kubenswrapper[4820]: I0221 08:45:01.767350 4820 generic.go:334] "Generic (PLEG): container finished" podID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerID="d355316426a1db688b7e0f637002731b78bea683453439286ba724dcfa414dc2" exitCode=0 Feb 21 08:45:01 crc kubenswrapper[4820]: I0221 08:45:01.767407 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" event={"ID":"e44294e9-1a1b-421f-bed6-f72a8bb45e1d","Type":"ContainerDied","Data":"d355316426a1db688b7e0f637002731b78bea683453439286ba724dcfa414dc2"} Feb 21 08:45:01 crc kubenswrapper[4820]: I0221 08:45:01.768726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" event={"ID":"e44294e9-1a1b-421f-bed6-f72a8bb45e1d","Type":"ContainerStarted","Data":"46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995"} Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.100580 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.211063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") pod \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.211662 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") pod \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.211890 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "e44294e9-1a1b-421f-bed6-f72a8bb45e1d" (UID: "e44294e9-1a1b-421f-bed6-f72a8bb45e1d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.212431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") pod \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\" (UID: \"e44294e9-1a1b-421f-bed6-f72a8bb45e1d\") " Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.214318 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.218407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz" (OuterVolumeSpecName: "kube-api-access-lfrhz") pod "e44294e9-1a1b-421f-bed6-f72a8bb45e1d" (UID: "e44294e9-1a1b-421f-bed6-f72a8bb45e1d"). InnerVolumeSpecName "kube-api-access-lfrhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.218422 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e44294e9-1a1b-421f-bed6-f72a8bb45e1d" (UID: "e44294e9-1a1b-421f-bed6-f72a8bb45e1d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.316661 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.316704 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrhz\" (UniqueName: \"kubernetes.io/projected/e44294e9-1a1b-421f-bed6-f72a8bb45e1d-kube-api-access-lfrhz\") on node \"crc\" DevicePath \"\"" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.791308 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" event={"ID":"e44294e9-1a1b-421f-bed6-f72a8bb45e1d","Type":"ContainerDied","Data":"46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995"} Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.791660 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46758fe39c9534dfe81354a9889199d074af41843e3d4382ff1a479d09568995" Feb 21 08:45:03 crc kubenswrapper[4820]: I0221 08:45:03.791566 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8" Feb 21 08:45:04 crc kubenswrapper[4820]: I0221 08:45:04.194016 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:45:04 crc kubenswrapper[4820]: I0221 08:45:04.203032 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527680-vvkw5"] Feb 21 08:45:05 crc kubenswrapper[4820]: I0221 08:45:05.708810 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053b4929-8cfe-48ef-b6ab-d57fa3eeebc1" path="/var/lib/kubelet/pods/053b4929-8cfe-48ef-b6ab-d57fa3eeebc1/volumes" Feb 21 08:45:36 crc kubenswrapper[4820]: I0221 08:45:36.739965 4820 scope.go:117] "RemoveContainer" containerID="2044ba44e2360265584b1f1c99572b402737919ae46c5dc3430e7ebdb548610f" Feb 21 08:46:39 crc kubenswrapper[4820]: I0221 08:46:39.759583 4820 generic.go:334] "Generic (PLEG): container finished" podID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerID="1d4f0c693534689f47073c82a3834b39e5d861ea6be879a690c33b30f6e2157b" exitCode=0 Feb 21 08:46:39 crc kubenswrapper[4820]: I0221 08:46:39.759626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerDied","Data":"1d4f0c693534689f47073c82a3834b39e5d861ea6be879a690c33b30f6e2157b"} Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.240220 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.354707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") pod \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.354854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") pod \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.354909 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") pod \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\" (UID: \"26d06bf4-eb66-4688-a6ba-292af8a3b9f5\") " Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.361006 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l" (OuterVolumeSpecName: "kube-api-access-z5d9l") pod "26d06bf4-eb66-4688-a6ba-292af8a3b9f5" (UID: "26d06bf4-eb66-4688-a6ba-292af8a3b9f5"). InnerVolumeSpecName "kube-api-access-z5d9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.386200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory" (OuterVolumeSpecName: "inventory") pod "26d06bf4-eb66-4688-a6ba-292af8a3b9f5" (UID: "26d06bf4-eb66-4688-a6ba-292af8a3b9f5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.405518 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "26d06bf4-eb66-4688-a6ba-292af8a3b9f5" (UID: "26d06bf4-eb66-4688-a6ba-292af8a3b9f5"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.457185 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.457259 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5d9l\" (UniqueName: \"kubernetes.io/projected/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-kube-api-access-z5d9l\") on node \"crc\" DevicePath \"\"" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.457278 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d06bf4-eb66-4688-a6ba-292af8a3b9f5-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.778682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" event={"ID":"26d06bf4-eb66-4688-a6ba-292af8a3b9f5","Type":"ContainerDied","Data":"d304e4893e4a4885f85f938002c8c75a860b24a24145fb223319d0af26918d18"} Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.778722 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d304e4893e4a4885f85f938002c8c75a860b24a24145fb223319d0af26918d18" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.778735 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-bdzjs" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.870670 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hs6l2"] Feb 21 08:46:41 crc kubenswrapper[4820]: E0221 08:46:41.871154 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerName="download-cache-openstack-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871165 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerName="download-cache-openstack-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: E0221 08:46:41.871195 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerName="collect-profiles" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871201 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerName="collect-profiles" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871400 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d06bf4-eb66-4688-a6ba-292af8a3b9f5" containerName="download-cache-openstack-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.871427 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" containerName="collect-profiles" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.873132 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.875393 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.876824 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.877330 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.878185 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.888863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hs6l2"] Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.967606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.967687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:41 crc kubenswrapper[4820]: I0221 08:46:41.967748 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.069940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.070075 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.070128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.074793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod 
\"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.075089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.087469 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"configure-network-openstack-openstack-cell1-hs6l2\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.197157 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.762001 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hs6l2"] Feb 21 08:46:42 crc kubenswrapper[4820]: I0221 08:46:42.799004 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerStarted","Data":"0626e3e35aa773bddeddff1597472bda541b62311bf3ef021539021b08131634"} Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.824931 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.825327 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.845663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerStarted","Data":"20ee5d0e5e3482aa3c105d1b9a82837c1f018d6d0c88b9b65f165c4065adfb26"} Feb 21 08:46:43 crc kubenswrapper[4820]: I0221 08:46:43.876506 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" podStartSLOduration=2.477238859 podStartE2EDuration="2.876486043s" podCreationTimestamp="2026-02-21 
08:46:41 +0000 UTC" firstStartedPulling="2026-02-21 08:46:42.775076909 +0000 UTC m=+7177.808161107" lastFinishedPulling="2026-02-21 08:46:43.174324093 +0000 UTC m=+7178.207408291" observedRunningTime="2026-02-21 08:46:43.865413832 +0000 UTC m=+7178.898498050" watchObservedRunningTime="2026-02-21 08:46:43.876486043 +0000 UTC m=+7178.909570241" Feb 21 08:47:13 crc kubenswrapper[4820]: I0221 08:47:13.816159 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:47:13 crc kubenswrapper[4820]: I0221 08:47:13.816840 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.816535 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.817256 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.817359 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.819029 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:47:43 crc kubenswrapper[4820]: I0221 08:47:43.819136 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380" gracePeriod=600 Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.362694 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380" exitCode=0 Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.362780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380"} Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.363102 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5"} Feb 21 08:47:44 crc kubenswrapper[4820]: I0221 08:47:44.363128 4820 scope.go:117] "RemoveContainer" 
containerID="5e32394ae5e19283293ba42bcbc41810906fc34634bef843d04a4b6e883ffd0e" Feb 21 08:48:04 crc kubenswrapper[4820]: I0221 08:48:04.527978 4820 generic.go:334] "Generic (PLEG): container finished" podID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerID="20ee5d0e5e3482aa3c105d1b9a82837c1f018d6d0c88b9b65f165c4065adfb26" exitCode=0 Feb 21 08:48:04 crc kubenswrapper[4820]: I0221 08:48:04.528094 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerDied","Data":"20ee5d0e5e3482aa3c105d1b9a82837c1f018d6d0c88b9b65f165c4065adfb26"} Feb 21 08:48:05 crc kubenswrapper[4820]: I0221 08:48:05.948399 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.046604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") pod \"979ca93e-175b-4fde-b503-0be2b59e1a99\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.047553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") pod \"979ca93e-175b-4fde-b503-0be2b59e1a99\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.047856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") pod \"979ca93e-175b-4fde-b503-0be2b59e1a99\" (UID: \"979ca93e-175b-4fde-b503-0be2b59e1a99\") " Feb 21 08:48:06 crc kubenswrapper[4820]: 
I0221 08:48:06.052734 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4" (OuterVolumeSpecName: "kube-api-access-x5cr4") pod "979ca93e-175b-4fde-b503-0be2b59e1a99" (UID: "979ca93e-175b-4fde-b503-0be2b59e1a99"). InnerVolumeSpecName "kube-api-access-x5cr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.077852 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "979ca93e-175b-4fde-b503-0be2b59e1a99" (UID: "979ca93e-175b-4fde-b503-0be2b59e1a99"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.079228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory" (OuterVolumeSpecName: "inventory") pod "979ca93e-175b-4fde-b503-0be2b59e1a99" (UID: "979ca93e-175b-4fde-b503-0be2b59e1a99"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.151254 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.151295 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5cr4\" (UniqueName: \"kubernetes.io/projected/979ca93e-175b-4fde-b503-0be2b59e1a99-kube-api-access-x5cr4\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.151306 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/979ca93e-175b-4fde-b503-0be2b59e1a99-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.545667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" event={"ID":"979ca93e-175b-4fde-b503-0be2b59e1a99","Type":"ContainerDied","Data":"0626e3e35aa773bddeddff1597472bda541b62311bf3ef021539021b08131634"} Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.545712 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0626e3e35aa773bddeddff1597472bda541b62311bf3ef021539021b08131634" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.545715 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hs6l2" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.632935 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wn9jn"] Feb 21 08:48:06 crc kubenswrapper[4820]: E0221 08:48:06.634142 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerName="configure-network-openstack-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.634170 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerName="configure-network-openstack-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.634482 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="979ca93e-175b-4fde-b503-0be2b59e1a99" containerName="configure-network-openstack-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.635377 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.645308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wn9jn"] Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.646277 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.650841 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.650948 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.651651 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.660815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.660889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.661005 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.764371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.764822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.764945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.782867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: 
\"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.783551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.787231 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"validate-network-openstack-openstack-cell1-wn9jn\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:06 crc kubenswrapper[4820]: I0221 08:48:06.954822 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:07 crc kubenswrapper[4820]: I0221 08:48:07.445088 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wn9jn"] Feb 21 08:48:07 crc kubenswrapper[4820]: I0221 08:48:07.554091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerStarted","Data":"2eddcb66d437c45ef856273c0eff221fd08e4e75074809b6014ffeabb08633fc"} Feb 21 08:48:08 crc kubenswrapper[4820]: I0221 08:48:08.566173 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerStarted","Data":"4a2ac9ecaa31b83ea0c818951ff0576244db3281516c0c93689b4103b96d80e3"} Feb 21 08:48:08 crc kubenswrapper[4820]: I0221 08:48:08.586430 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" podStartSLOduration=1.9522718110000001 podStartE2EDuration="2.586411624s" podCreationTimestamp="2026-02-21 08:48:06 +0000 UTC" firstStartedPulling="2026-02-21 08:48:07.450040841 +0000 UTC m=+7262.483125039" lastFinishedPulling="2026-02-21 08:48:08.084180654 +0000 UTC m=+7263.117264852" observedRunningTime="2026-02-21 08:48:08.585282693 +0000 UTC m=+7263.618366891" watchObservedRunningTime="2026-02-21 08:48:08.586411624 +0000 UTC m=+7263.619495822" Feb 21 08:48:13 crc kubenswrapper[4820]: I0221 08:48:13.625455 4820 generic.go:334] "Generic (PLEG): container finished" podID="15b9de10-7535-4310-9681-2d0171fb4376" containerID="4a2ac9ecaa31b83ea0c818951ff0576244db3281516c0c93689b4103b96d80e3" exitCode=0 Feb 21 08:48:13 crc kubenswrapper[4820]: I0221 08:48:13.625561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerDied","Data":"4a2ac9ecaa31b83ea0c818951ff0576244db3281516c0c93689b4103b96d80e3"} Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.062923 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.237122 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") pod \"15b9de10-7535-4310-9681-2d0171fb4376\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.237227 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") pod \"15b9de10-7535-4310-9681-2d0171fb4376\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.237634 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") pod \"15b9de10-7535-4310-9681-2d0171fb4376\" (UID: \"15b9de10-7535-4310-9681-2d0171fb4376\") " Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.242458 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9" (OuterVolumeSpecName: "kube-api-access-vmhm9") pod "15b9de10-7535-4310-9681-2d0171fb4376" (UID: "15b9de10-7535-4310-9681-2d0171fb4376"). InnerVolumeSpecName "kube-api-access-vmhm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.264834 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "15b9de10-7535-4310-9681-2d0171fb4376" (UID: "15b9de10-7535-4310-9681-2d0171fb4376"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.279299 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory" (OuterVolumeSpecName: "inventory") pod "15b9de10-7535-4310-9681-2d0171fb4376" (UID: "15b9de10-7535-4310-9681-2d0171fb4376"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.341145 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.342012 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15b9de10-7535-4310-9681-2d0171fb4376-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.342230 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmhm9\" (UniqueName: \"kubernetes.io/projected/15b9de10-7535-4310-9681-2d0171fb4376-kube-api-access-vmhm9\") on node \"crc\" DevicePath \"\"" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.648383 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" 
event={"ID":"15b9de10-7535-4310-9681-2d0171fb4376","Type":"ContainerDied","Data":"2eddcb66d437c45ef856273c0eff221fd08e4e75074809b6014ffeabb08633fc"} Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.648428 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eddcb66d437c45ef856273c0eff221fd08e4e75074809b6014ffeabb08633fc" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.648449 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wn9jn" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.720989 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-79fjr"] Feb 21 08:48:15 crc kubenswrapper[4820]: E0221 08:48:15.721468 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b9de10-7535-4310-9681-2d0171fb4376" containerName="validate-network-openstack-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.721487 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b9de10-7535-4310-9681-2d0171fb4376" containerName="validate-network-openstack-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.721670 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b9de10-7535-4310-9681-2d0171fb4376" containerName="validate-network-openstack-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.722419 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.725633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.725955 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.726558 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.726640 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.740391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-79fjr"] Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.887495 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.887604 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.887666 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.989063 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.989194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.989284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.993132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr" Feb 21 
08:48:15 crc kubenswrapper[4820]: I0221 08:48:15.994021 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.008041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"install-os-openstack-openstack-cell1-79fjr\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") " pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.049480 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.577545 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-79fjr"]
Feb 21 08:48:16 crc kubenswrapper[4820]: I0221 08:48:16.659354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerStarted","Data":"41a04bba9543a00bbd50c7dee3ccd6277a5fb10cd6b514d8903b92b7bb9d627f"}
Feb 21 08:48:17 crc kubenswrapper[4820]: I0221 08:48:17.671057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerStarted","Data":"66e80841080b53e2e1c62ade5f863a181bd7fdddca1e53ca515fd528a4e40c3a"}
Feb 21 08:48:17 crc kubenswrapper[4820]: I0221 08:48:17.696883 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-79fjr" podStartSLOduration=2.306038769 podStartE2EDuration="2.696861825s" podCreationTimestamp="2026-02-21 08:48:15 +0000 UTC" firstStartedPulling="2026-02-21 08:48:16.584296667 +0000 UTC m=+7271.617380865" lastFinishedPulling="2026-02-21 08:48:16.975119723 +0000 UTC m=+7272.008203921" observedRunningTime="2026-02-21 08:48:17.690727089 +0000 UTC m=+7272.723811287" watchObservedRunningTime="2026-02-21 08:48:17.696861825 +0000 UTC m=+7272.729946033"
Feb 21 08:49:01 crc kubenswrapper[4820]: I0221 08:49:01.103506 4820 generic.go:334] "Generic (PLEG): container finished" podID="8f2548bf-793b-464b-9659-2962669f353e" containerID="66e80841080b53e2e1c62ade5f863a181bd7fdddca1e53ca515fd528a4e40c3a" exitCode=0
Feb 21 08:49:01 crc kubenswrapper[4820]: I0221 08:49:01.103638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerDied","Data":"66e80841080b53e2e1c62ade5f863a181bd7fdddca1e53ca515fd528a4e40c3a"}
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.514093 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.639759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") pod \"8f2548bf-793b-464b-9659-2962669f353e\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") "
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.640004 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") pod \"8f2548bf-793b-464b-9659-2962669f353e\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") "
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.640117 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") pod \"8f2548bf-793b-464b-9659-2962669f353e\" (UID: \"8f2548bf-793b-464b-9659-2962669f353e\") "
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.646032 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t" (OuterVolumeSpecName: "kube-api-access-wcd5t") pod "8f2548bf-793b-464b-9659-2962669f353e" (UID: "8f2548bf-793b-464b-9659-2962669f353e"). InnerVolumeSpecName "kube-api-access-wcd5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.668602 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8f2548bf-793b-464b-9659-2962669f353e" (UID: "8f2548bf-793b-464b-9659-2962669f353e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.669831 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory" (OuterVolumeSpecName: "inventory") pod "8f2548bf-793b-464b-9659-2962669f353e" (UID: "8f2548bf-793b-464b-9659-2962669f353e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.742716 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-inventory\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.742762 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f2548bf-793b-464b-9659-2962669f353e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:02 crc kubenswrapper[4820]: I0221 08:49:02.742773 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcd5t\" (UniqueName: \"kubernetes.io/projected/8f2548bf-793b-464b-9659-2962669f353e-kube-api-access-wcd5t\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.122983 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-79fjr" event={"ID":"8f2548bf-793b-464b-9659-2962669f353e","Type":"ContainerDied","Data":"41a04bba9543a00bbd50c7dee3ccd6277a5fb10cd6b514d8903b92b7bb9d627f"}
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.123022 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a04bba9543a00bbd50c7dee3ccd6277a5fb10cd6b514d8903b92b7bb9d627f"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.123026 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-79fjr"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.203390 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cggfb"]
Feb 21 08:49:03 crc kubenswrapper[4820]: E0221 08:49:03.203884 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2548bf-793b-464b-9659-2962669f353e" containerName="install-os-openstack-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.203910 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2548bf-793b-464b-9659-2962669f353e" containerName="install-os-openstack-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.204116 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2548bf-793b-464b-9659-2962669f353e" containerName="install-os-openstack-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.205330 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207539 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207585 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207724 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.207842 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.218742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cggfb"]
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.354063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.354153 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.354575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.457046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.457110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.457219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.462334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.462936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.474910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"configure-os-openstack-openstack-cell1-cggfb\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:03 crc kubenswrapper[4820]: I0221 08:49:03.528519 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
Feb 21 08:49:04 crc kubenswrapper[4820]: I0221 08:49:04.070566 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-cggfb"]
Feb 21 08:49:04 crc kubenswrapper[4820]: I0221 08:49:04.137790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerStarted","Data":"d3ad9c556fb047894c6a4ef09593a61fbc7b6753b33612f81c77baf66c0b7529"}
Feb 21 08:49:05 crc kubenswrapper[4820]: I0221 08:49:05.147703 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerStarted","Data":"0634270323fa664eb00a226f803413936e00355a17c7245073d0bfb6257eeeb1"}
Feb 21 08:49:05 crc kubenswrapper[4820]: I0221 08:49:05.188743 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" podStartSLOduration=1.7963288290000001 podStartE2EDuration="2.188723046s" podCreationTimestamp="2026-02-21 08:49:03 +0000 UTC" firstStartedPulling="2026-02-21 08:49:04.073780454 +0000 UTC m=+7319.106864662" lastFinishedPulling="2026-02-21 08:49:04.466174671 +0000 UTC m=+7319.499258879" observedRunningTime="2026-02-21 08:49:05.181199742 +0000 UTC m=+7320.214283930" watchObservedRunningTime="2026-02-21 08:49:05.188723046 +0000 UTC m=+7320.221807244"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.781261 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.783865 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.797030 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.830765 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.831294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.831399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.933849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.934110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.934204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.934795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.935035 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:21 crc kubenswrapper[4820]: I0221 08:49:21.953349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"certified-operators-926lq\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") " pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:22 crc kubenswrapper[4820]: I0221 08:49:22.109969 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:22 crc kubenswrapper[4820]: I0221 08:49:22.612333 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:23 crc kubenswrapper[4820]: I0221 08:49:23.326879 4820 generic.go:334] "Generic (PLEG): container finished" podID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f" exitCode=0
Feb 21 08:49:23 crc kubenswrapper[4820]: I0221 08:49:23.326976 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"}
Feb 21 08:49:23 crc kubenswrapper[4820]: I0221 08:49:23.327192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerStarted","Data":"d7ed5ad8e0089012d5d820d3260cef995279ba3d115ae6ade6cca75392a8e9c5"}
Feb 21 08:49:24 crc kubenswrapper[4820]: I0221 08:49:24.338399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerStarted","Data":"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"}
Feb 21 08:49:26 crc kubenswrapper[4820]: I0221 08:49:26.359127 4820 generic.go:334] "Generic (PLEG): container finished" podID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565" exitCode=0
Feb 21 08:49:26 crc kubenswrapper[4820]: I0221 08:49:26.359202 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"}
Feb 21 08:49:27 crc kubenswrapper[4820]: I0221 08:49:27.370523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerStarted","Data":"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"}
Feb 21 08:49:27 crc kubenswrapper[4820]: I0221 08:49:27.388053 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-926lq" podStartSLOduration=2.990070545 podStartE2EDuration="6.388034935s" podCreationTimestamp="2026-02-21 08:49:21 +0000 UTC" firstStartedPulling="2026-02-21 08:49:23.328714243 +0000 UTC m=+7338.361798441" lastFinishedPulling="2026-02-21 08:49:26.726678633 +0000 UTC m=+7341.759762831" observedRunningTime="2026-02-21 08:49:27.385625821 +0000 UTC m=+7342.418710039" watchObservedRunningTime="2026-02-21 08:49:27.388034935 +0000 UTC m=+7342.421119143"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.110925 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.111219 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.158899 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.487531 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:32 crc kubenswrapper[4820]: I0221 08:49:32.542428 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:34 crc kubenswrapper[4820]: I0221 08:49:34.441678 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-926lq" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server" containerID="cri-o://89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a" gracePeriod=2
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.038097 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.118754 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") pod \"3596f53c-dfdd-4e87-95db-35af3c55ea47\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") "
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.118844 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") pod \"3596f53c-dfdd-4e87-95db-35af3c55ea47\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") "
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.118865 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") pod \"3596f53c-dfdd-4e87-95db-35af3c55ea47\" (UID: \"3596f53c-dfdd-4e87-95db-35af3c55ea47\") "
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.120319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities" (OuterVolumeSpecName: "utilities") pod "3596f53c-dfdd-4e87-95db-35af3c55ea47" (UID: "3596f53c-dfdd-4e87-95db-35af3c55ea47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.125514 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54" (OuterVolumeSpecName: "kube-api-access-r6n54") pod "3596f53c-dfdd-4e87-95db-35af3c55ea47" (UID: "3596f53c-dfdd-4e87-95db-35af3c55ea47"). InnerVolumeSpecName "kube-api-access-r6n54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.175266 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3596f53c-dfdd-4e87-95db-35af3c55ea47" (UID: "3596f53c-dfdd-4e87-95db-35af3c55ea47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.220924 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6n54\" (UniqueName: \"kubernetes.io/projected/3596f53c-dfdd-4e87-95db-35af3c55ea47-kube-api-access-r6n54\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.221169 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.221183 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3596f53c-dfdd-4e87-95db-35af3c55ea47-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454113 4820 generic.go:334] "Generic (PLEG): container finished" podID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a" exitCode=0
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"}
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454218 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-926lq"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454279 4820 scope.go:117] "RemoveContainer" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.454232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-926lq" event={"ID":"3596f53c-dfdd-4e87-95db-35af3c55ea47","Type":"ContainerDied","Data":"d7ed5ad8e0089012d5d820d3260cef995279ba3d115ae6ade6cca75392a8e9c5"}
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.479992 4820 scope.go:117] "RemoveContainer" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.504538 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.513637 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-926lq"]
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.523564 4820 scope.go:117] "RemoveContainer" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.557585 4820 scope.go:117] "RemoveContainer" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"
Feb 21 08:49:35 crc kubenswrapper[4820]: E0221 08:49:35.560867 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a\": container with ID starting with 89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a not found: ID does not exist" containerID="89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.560919 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a"} err="failed to get container status \"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a\": rpc error: code = NotFound desc = could not find container \"89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a\": container with ID starting with 89506bda263ea70fbfa7f71d5b4afd65de97968fc870a38d1c2a53e3e32c737a not found: ID does not exist"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.560950 4820 scope.go:117] "RemoveContainer" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"
Feb 21 08:49:35 crc kubenswrapper[4820]: E0221 08:49:35.561645 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565\": container with ID starting with 460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565 not found: ID does not exist" containerID="460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.561773 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565"} err="failed to get container status \"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565\": rpc error: code = NotFound desc = could not find container \"460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565\": container with ID starting with 460a65c8894b2efd27683543484d7d44412bcbcfa154f87f60f93d456cb92565 not found: ID does not exist"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.561789 4820 scope.go:117] "RemoveContainer" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"
Feb 21 08:49:35 crc kubenswrapper[4820]: E0221 08:49:35.562089 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f\": container with ID starting with d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f not found: ID does not exist" containerID="d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.562115 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f"} err="failed to get container status \"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f\": rpc error: code = NotFound desc = could not find container \"d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f\": container with ID starting with d8f979a6c0c66c07b20771b4257be7c9f9f3d6f4542c6eb4703f9f2a64625b6f not found: ID does not exist"
Feb 21 08:49:35 crc kubenswrapper[4820]: I0221 08:49:35.712642 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" path="/var/lib/kubelet/pods/3596f53c-dfdd-4e87-95db-35af3c55ea47/volumes"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.805422 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"]
Feb 21 08:49:37 crc kubenswrapper[4820]: E0221 08:49:37.806376 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806390 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server"
Feb 21 08:49:37 crc kubenswrapper[4820]: E0221 08:49:37.806426 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-content"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806432 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-content"
Feb 21 08:49:37 crc kubenswrapper[4820]: E0221 08:49:37.806445 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-utilities"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806450 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="extract-utilities"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.806658 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3596f53c-dfdd-4e87-95db-35af3c55ea47" containerName="registry-server"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.808292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.819218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"]
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.877655 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.877879 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.877917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980736 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.980985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:37 crc kubenswrapper[4820]: I0221 08:49:37.981414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:38 crc kubenswrapper[4820]: I0221 08:49:38.003432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"redhat-operators-56bw7\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:38 crc kubenswrapper[4820]: I0221 08:49:38.147082 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7"
Feb 21 08:49:38 crc kubenswrapper[4820]: I0221 08:49:38.628057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"]
Feb 21 08:49:39 crc kubenswrapper[4820]: I0221 08:49:39.491066 4820 generic.go:334] "Generic (PLEG): container finished" podID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" exitCode=0
Feb 21 08:49:39 crc kubenswrapper[4820]: I0221 08:49:39.491117 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430"}
Feb 21 08:49:39 crc kubenswrapper[4820]: I0221 08:49:39.491171 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerStarted","Data":"22fb8a14a5b24ef65062617fea32231c70d149b5d219d695fafaa29193fe4716"}
Feb 21 08:49:41 crc kubenswrapper[4820]: I0221 08:49:41.510780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerStarted","Data":"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed"}
Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.572849 4820 generic.go:334] "Generic (PLEG): container finished" podID="ceace068-0023-4d48-b24d-30cafb14db01" containerID="0634270323fa664eb00a226f803413936e00355a17c7245073d0bfb6257eeeb1" exitCode=0
Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.572930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb"
event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerDied","Data":"0634270323fa664eb00a226f803413936e00355a17c7245073d0bfb6257eeeb1"} Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.578935 4820 generic.go:334] "Generic (PLEG): container finished" podID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" exitCode=0 Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.578980 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed"} Feb 21 08:49:49 crc kubenswrapper[4820]: I0221 08:49:49.583137 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:49:50 crc kubenswrapper[4820]: I0221 08:49:50.588218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerStarted","Data":"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894"} Feb 21 08:49:50 crc kubenswrapper[4820]: I0221 08:49:50.612477 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56bw7" podStartSLOduration=3.143706142 podStartE2EDuration="13.612454336s" podCreationTimestamp="2026-02-21 08:49:37 +0000 UTC" firstStartedPulling="2026-02-21 08:49:39.495856627 +0000 UTC m=+7354.528940825" lastFinishedPulling="2026-02-21 08:49:49.964604821 +0000 UTC m=+7364.997689019" observedRunningTime="2026-02-21 08:49:50.609405763 +0000 UTC m=+7365.642489961" watchObservedRunningTime="2026-02-21 08:49:50.612454336 +0000 UTC m=+7365.645538534" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.023064 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.153503 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") pod \"ceace068-0023-4d48-b24d-30cafb14db01\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.153673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") pod \"ceace068-0023-4d48-b24d-30cafb14db01\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.153730 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") pod \"ceace068-0023-4d48-b24d-30cafb14db01\" (UID: \"ceace068-0023-4d48-b24d-30cafb14db01\") " Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.162203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx" (OuterVolumeSpecName: "kube-api-access-9xqzx") pod "ceace068-0023-4d48-b24d-30cafb14db01" (UID: "ceace068-0023-4d48-b24d-30cafb14db01"). InnerVolumeSpecName "kube-api-access-9xqzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.195923 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory" (OuterVolumeSpecName: "inventory") pod "ceace068-0023-4d48-b24d-30cafb14db01" (UID: "ceace068-0023-4d48-b24d-30cafb14db01"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.197051 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ceace068-0023-4d48-b24d-30cafb14db01" (UID: "ceace068-0023-4d48-b24d-30cafb14db01"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.256530 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.256563 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xqzx\" (UniqueName: \"kubernetes.io/projected/ceace068-0023-4d48-b24d-30cafb14db01-kube-api-access-9xqzx\") on node \"crc\" DevicePath \"\"" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.256573 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceace068-0023-4d48-b24d-30cafb14db01-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.598076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" event={"ID":"ceace068-0023-4d48-b24d-30cafb14db01","Type":"ContainerDied","Data":"d3ad9c556fb047894c6a4ef09593a61fbc7b6753b33612f81c77baf66c0b7529"} Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.598120 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ad9c556fb047894c6a4ef09593a61fbc7b6753b33612f81c77baf66c0b7529" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.598143 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-cggfb" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.676834 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-4pwnt"] Feb 21 08:49:51 crc kubenswrapper[4820]: E0221 08:49:51.677369 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceace068-0023-4d48-b24d-30cafb14db01" containerName="configure-os-openstack-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.677392 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceace068-0023-4d48-b24d-30cafb14db01" containerName="configure-os-openstack-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.677636 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceace068-0023-4d48-b24d-30cafb14db01" containerName="configure-os-openstack-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.678536 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.680322 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.680882 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.681083 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.685358 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.694411 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-4pwnt"] Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.766401 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.766586 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.766631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.868461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.868528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.868639 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.873596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.883142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.898610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"ssh-known-hosts-openstack-4pwnt\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:51 crc kubenswrapper[4820]: I0221 08:49:51.998962 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:49:52 crc kubenswrapper[4820]: I0221 08:49:52.600758 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-4pwnt"] Feb 21 08:49:52 crc kubenswrapper[4820]: W0221 08:49:52.601164 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2090d99c_7240_49ef_85d8_187c0cd6c146.slice/crio-02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65 WatchSource:0}: Error finding container 02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65: Status 404 returned error can't find the container with id 02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65 Feb 21 08:49:53 crc kubenswrapper[4820]: I0221 08:49:53.617402 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerStarted","Data":"3d1b766e377f20c92c3eb643731421d1cc02bbe68c1fd5c38d4a9c93b90d83fa"} Feb 21 08:49:53 crc kubenswrapper[4820]: I0221 08:49:53.617655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerStarted","Data":"02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65"} Feb 21 08:49:53 crc kubenswrapper[4820]: I0221 08:49:53.641452 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-4pwnt" podStartSLOduration=2.225744524 podStartE2EDuration="2.641425834s" podCreationTimestamp="2026-02-21 08:49:51 +0000 UTC" firstStartedPulling="2026-02-21 08:49:52.604148181 +0000 UTC m=+7367.637232379" lastFinishedPulling="2026-02-21 08:49:53.019829471 +0000 UTC m=+7368.052913689" observedRunningTime="2026-02-21 08:49:53.639377968 +0000 UTC m=+7368.672462196" watchObservedRunningTime="2026-02-21 08:49:53.641425834 +0000 UTC m=+7368.674510052" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.147380 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.147726 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.208657 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.721587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:49:58 crc kubenswrapper[4820]: I0221 08:49:58.763719 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"] Feb 21 08:50:00 crc kubenswrapper[4820]: I0221 08:50:00.687119 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56bw7" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" 
containerName="registry-server" containerID="cri-o://9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" gracePeriod=2 Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.197824 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.262130 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") pod \"46eb670c-2901-4efd-b628-bbc1e5c02c60\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.262324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") pod \"46eb670c-2901-4efd-b628-bbc1e5c02c60\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.262370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") pod \"46eb670c-2901-4efd-b628-bbc1e5c02c60\" (UID: \"46eb670c-2901-4efd-b628-bbc1e5c02c60\") " Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.263353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities" (OuterVolumeSpecName: "utilities") pod "46eb670c-2901-4efd-b628-bbc1e5c02c60" (UID: "46eb670c-2901-4efd-b628-bbc1e5c02c60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.269339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg" (OuterVolumeSpecName: "kube-api-access-746mg") pod "46eb670c-2901-4efd-b628-bbc1e5c02c60" (UID: "46eb670c-2901-4efd-b628-bbc1e5c02c60"). InnerVolumeSpecName "kube-api-access-746mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.364917 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.364954 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746mg\" (UniqueName: \"kubernetes.io/projected/46eb670c-2901-4efd-b628-bbc1e5c02c60-kube-api-access-746mg\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.396792 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46eb670c-2901-4efd-b628-bbc1e5c02c60" (UID: "46eb670c-2901-4efd-b628-bbc1e5c02c60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.466442 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46eb670c-2901-4efd-b628-bbc1e5c02c60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.698837 4820 generic.go:334] "Generic (PLEG): container finished" podID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerID="3d1b766e377f20c92c3eb643731421d1cc02bbe68c1fd5c38d4a9c93b90d83fa" exitCode=0 Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.701210 4820 generic.go:334] "Generic (PLEG): container finished" podID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" exitCode=0 Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.701357 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56bw7" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707264 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerDied","Data":"3d1b766e377f20c92c3eb643731421d1cc02bbe68c1fd5c38d4a9c93b90d83fa"} Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707323 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894"} Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56bw7" event={"ID":"46eb670c-2901-4efd-b628-bbc1e5c02c60","Type":"ContainerDied","Data":"22fb8a14a5b24ef65062617fea32231c70d149b5d219d695fafaa29193fe4716"} Feb 21 
08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.707376 4820 scope.go:117] "RemoveContainer" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.733097 4820 scope.go:117] "RemoveContainer" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.756543 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"] Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.768990 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56bw7"] Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.774757 4820 scope.go:117] "RemoveContainer" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.807098 4820 scope.go:117] "RemoveContainer" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" Feb 21 08:50:01 crc kubenswrapper[4820]: E0221 08:50:01.807641 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894\": container with ID starting with 9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894 not found: ID does not exist" containerID="9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.807749 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894"} err="failed to get container status \"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894\": rpc error: code = NotFound desc = could not find container \"9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894\": container with ID 
starting with 9c0de537e8f26c4f32f44b8ccf15e31d20a1feb86a479fab34181069b6218894 not found: ID does not exist" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.807785 4820 scope.go:117] "RemoveContainer" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" Feb 21 08:50:01 crc kubenswrapper[4820]: E0221 08:50:01.808172 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed\": container with ID starting with 9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed not found: ID does not exist" containerID="9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.808267 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed"} err="failed to get container status \"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed\": rpc error: code = NotFound desc = could not find container \"9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed\": container with ID starting with 9d12964bbafe1e2a8a512f85b40610caae24f020d2dcb9dfce013b93afb969ed not found: ID does not exist" Feb 21 08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.808313 4820 scope.go:117] "RemoveContainer" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" Feb 21 08:50:01 crc kubenswrapper[4820]: E0221 08:50:01.808576 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430\": container with ID starting with 6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430 not found: ID does not exist" containerID="6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430" Feb 21 
08:50:01 crc kubenswrapper[4820]: I0221 08:50:01.808605 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430"} err="failed to get container status \"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430\": rpc error: code = NotFound desc = could not find container \"6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430\": container with ID starting with 6c19dc29f430aa80fff8995e4d1b525883c6f78221cedc2d6ccebdd85d7de430 not found: ID does not exist" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.106304 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt" Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.200248 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") pod \"2090d99c-7240-49ef-85d8-187c0cd6c146\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.200330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") pod \"2090d99c-7240-49ef-85d8-187c0cd6c146\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.200427 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") pod \"2090d99c-7240-49ef-85d8-187c0cd6c146\" (UID: \"2090d99c-7240-49ef-85d8-187c0cd6c146\") " Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.205341 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h" (OuterVolumeSpecName: "kube-api-access-k5x5h") pod "2090d99c-7240-49ef-85d8-187c0cd6c146" (UID: "2090d99c-7240-49ef-85d8-187c0cd6c146"). InnerVolumeSpecName "kube-api-access-k5x5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.231562 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2090d99c-7240-49ef-85d8-187c0cd6c146" (UID: "2090d99c-7240-49ef-85d8-187c0cd6c146"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.233745 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2090d99c-7240-49ef-85d8-187c0cd6c146" (UID: "2090d99c-7240-49ef-85d8-187c0cd6c146"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.302887 4820 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.302919 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2090d99c-7240-49ef-85d8-187c0cd6c146-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.302928 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5x5h\" (UniqueName: \"kubernetes.io/projected/2090d99c-7240-49ef-85d8-187c0cd6c146-kube-api-access-k5x5h\") on node \"crc\" DevicePath \"\""
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.707389 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" path="/var/lib/kubelet/pods/46eb670c-2901-4efd-b628-bbc1e5c02c60/volumes"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.724751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-4pwnt" event={"ID":"2090d99c-7240-49ef-85d8-187c0cd6c146","Type":"ContainerDied","Data":"02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65"}
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.724793 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f6591a0ed82e5e6b18be2bed8f80f23ff7b9289a30ba84cb3a3c16fc302f65"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.724835 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-4pwnt"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.809631 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-57cnm"]
Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810090 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-content"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810107 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-content"
Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810124 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="registry-server"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810132 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="registry-server"
Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810145 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerName="ssh-known-hosts-openstack"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerName="ssh-known-hosts-openstack"
Feb 21 08:50:03 crc kubenswrapper[4820]: E0221 08:50:03.810168 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-utilities"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810180 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="extract-utilities"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810438 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="46eb670c-2901-4efd-b628-bbc1e5c02c60" containerName="registry-server"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.810473 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2090d99c-7240-49ef-85d8-187c0cd6c146" containerName="ssh-known-hosts-openstack"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.811154 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.814654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.817127 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.818338 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.818609 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.824138 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-57cnm"]
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.915328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.915508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:03 crc kubenswrapper[4820]: I0221 08:50:03.915666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.018474 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.018561 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.018663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.023415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.024814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.034860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"run-os-openstack-openstack-cell1-57cnm\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") " pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.146211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:04 crc kubenswrapper[4820]: I0221 08:50:04.734006 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-57cnm"]
Feb 21 08:50:04 crc kubenswrapper[4820]: W0221 08:50:04.739379 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ade5366_52be_4c8f_b9e2_1088b04caa90.slice/crio-9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905 WatchSource:0}: Error finding container 9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905: Status 404 returned error can't find the container with id 9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905
Feb 21 08:50:05 crc kubenswrapper[4820]: I0221 08:50:05.748100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerStarted","Data":"22bd70473682d2534a6bda081c017f55cf901c4665ca71c7e2f078b002e52460"}
Feb 21 08:50:05 crc kubenswrapper[4820]: I0221 08:50:05.748470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerStarted","Data":"9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905"}
Feb 21 08:50:05 crc kubenswrapper[4820]: I0221 08:50:05.771927 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-57cnm" podStartSLOduration=2.312584654 podStartE2EDuration="2.771883189s" podCreationTimestamp="2026-02-21 08:50:03 +0000 UTC" firstStartedPulling="2026-02-21 08:50:04.7431678 +0000 UTC m=+7379.776251998" lastFinishedPulling="2026-02-21 08:50:05.202466335 +0000 UTC m=+7380.235550533" observedRunningTime="2026-02-21 08:50:05.770556994 +0000 UTC m=+7380.803641202" watchObservedRunningTime="2026-02-21 08:50:05.771883189 +0000 UTC m=+7380.804967387"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.706397 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m9ngv"]
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.709210 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.721891 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"]
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.752895 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.753624 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.753873 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.855846 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.856088 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:10 crc kubenswrapper[4820]: I0221 08:50:10.878616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"community-operators-m9ngv\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") " pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.032252 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.560937 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"]
Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.834658 4820 generic.go:334] "Generic (PLEG): container finished" podID="5865e706-eb59-4999-b451-4c5001489062" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" exitCode=0
Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.834813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d"}
Feb 21 08:50:11 crc kubenswrapper[4820]: I0221 08:50:11.835049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerStarted","Data":"f842890b59139105b6197f590c30468b53f9a314697680080deb4c7ee76c2722"}
Feb 21 08:50:12 crc kubenswrapper[4820]: I0221 08:50:12.848518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerStarted","Data":"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2"}
Feb 21 08:50:12 crc kubenswrapper[4820]: I0221 08:50:12.850642 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerID="22bd70473682d2534a6bda081c017f55cf901c4665ca71c7e2f078b002e52460" exitCode=0
Feb 21 08:50:12 crc kubenswrapper[4820]: I0221 08:50:12.850682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerDied","Data":"22bd70473682d2534a6bda081c017f55cf901c4665ca71c7e2f078b002e52460"}
Feb 21 08:50:13 crc kubenswrapper[4820]: I0221 08:50:13.815798 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 08:50:13 crc kubenswrapper[4820]: I0221 08:50:13.815859 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.275911 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.439648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") pod \"4ade5366-52be-4c8f-b9e2-1088b04caa90\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") "
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.439762 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") pod \"4ade5366-52be-4c8f-b9e2-1088b04caa90\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") "
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.439902 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") pod \"4ade5366-52be-4c8f-b9e2-1088b04caa90\" (UID: \"4ade5366-52be-4c8f-b9e2-1088b04caa90\") "
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.446195 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd" (OuterVolumeSpecName: "kube-api-access-xw5xd") pod "4ade5366-52be-4c8f-b9e2-1088b04caa90" (UID: "4ade5366-52be-4c8f-b9e2-1088b04caa90"). InnerVolumeSpecName "kube-api-access-xw5xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.475055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4ade5366-52be-4c8f-b9e2-1088b04caa90" (UID: "4ade5366-52be-4c8f-b9e2-1088b04caa90"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.486876 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory" (OuterVolumeSpecName: "inventory") pod "4ade5366-52be-4c8f-b9e2-1088b04caa90" (UID: "4ade5366-52be-4c8f-b9e2-1088b04caa90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.541975 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.542011 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ade5366-52be-4c8f-b9e2-1088b04caa90-inventory\") on node \"crc\" DevicePath \"\""
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.542020 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw5xd\" (UniqueName: \"kubernetes.io/projected/4ade5366-52be-4c8f-b9e2-1088b04caa90-kube-api-access-xw5xd\") on node \"crc\" DevicePath \"\""
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.869364 4820 generic.go:334] "Generic (PLEG): container finished" podID="5865e706-eb59-4999-b451-4c5001489062" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" exitCode=0
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.869442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2"}
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.872931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-57cnm" event={"ID":"4ade5366-52be-4c8f-b9e2-1088b04caa90","Type":"ContainerDied","Data":"9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905"}
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.872968 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd453dfb0e382319e49ee39393f6c839d161b5bdb8fd5d02e254256d7567905"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.872966 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-57cnm"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.977289 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-42cjk"]
Feb 21 08:50:14 crc kubenswrapper[4820]: E0221 08:50:14.977884 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerName="run-os-openstack-openstack-cell1"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.977900 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerName="run-os-openstack-openstack-cell1"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.978375 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ade5366-52be-4c8f-b9e2-1088b04caa90" containerName="run-os-openstack-openstack-cell1"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.979253 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983199 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983504 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983693 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.983922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 08:50:14 crc kubenswrapper[4820]: I0221 08:50:14.996425 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-42cjk"]
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.155660 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.155736 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.155757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.257457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.257541 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.257563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.262893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.264172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.274733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"reboot-os-openstack-openstack-cell1-42cjk\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.311417 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk"
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.855089 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-42cjk"]
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.887305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerStarted","Data":"660cea90273488044b3e84b2bdf63f34b73113632b5713fa9be7b6b1c4f0dfaa"}
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.892177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerStarted","Data":"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd"}
Feb 21 08:50:15 crc kubenswrapper[4820]: I0221 08:50:15.917375 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m9ngv" podStartSLOduration=2.491821685 podStartE2EDuration="5.917351123s" podCreationTimestamp="2026-02-21 08:50:10 +0000 UTC" firstStartedPulling="2026-02-21 08:50:11.839373734 +0000 UTC m=+7386.872457922" lastFinishedPulling="2026-02-21 08:50:15.264903162 +0000 UTC m=+7390.297987360" observedRunningTime="2026-02-21 08:50:15.907393452 +0000 UTC m=+7390.940477670" watchObservedRunningTime="2026-02-21 08:50:15.917351123 +0000 UTC m=+7390.950435321"
Feb 21 08:50:16 crc kubenswrapper[4820]: I0221 08:50:16.901078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerStarted","Data":"48d8a8222a29a51075191c85ce26d089db15eaa1bb388d6665c186ce14164e1c"}
Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.033159 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.033746 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.099190 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:21 crc kubenswrapper[4820]: I0221 08:50:21.126195 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" podStartSLOduration=6.672270227 podStartE2EDuration="7.126174556s" podCreationTimestamp="2026-02-21 08:50:14 +0000 UTC" firstStartedPulling="2026-02-21 08:50:15.869593095 +0000 UTC m=+7390.902677293" lastFinishedPulling="2026-02-21 08:50:16.323497424 +0000 UTC m=+7391.356581622" observedRunningTime="2026-02-21 08:50:16.9312181 +0000 UTC m=+7391.964302298" watchObservedRunningTime="2026-02-21 08:50:21.126174556 +0000 UTC m=+7396.159258764"
Feb 21 08:50:22 crc kubenswrapper[4820]: I0221 08:50:22.002885 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:22 crc kubenswrapper[4820]: I0221 08:50:22.067619 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"]
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.748858 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"]
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.752027 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.762525 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"]
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.853652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.853710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.853779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.955421 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.955501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.955563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.956105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.956216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.970654 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m9ngv" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" containerID="cri-o://244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" gracePeriod=2
Feb 21 08:50:23 crc kubenswrapper[4820]: I0221 08:50:23.978291 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"redhat-marketplace-4h88z\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.076695 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z"
Feb 21 08:50:24 crc kubenswrapper[4820]: E0221 08:50:24.239188 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5865e706_eb59_4999_b451_4c5001489062.slice/crio-244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5865e706_eb59_4999_b451_4c5001489062.slice/crio-conmon-244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd.scope\": RecentStats: unable to find data in memory cache]"
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.481514 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv"
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.571764 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") pod \"5865e706-eb59-4999-b451-4c5001489062\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") "
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.571828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") pod \"5865e706-eb59-4999-b451-4c5001489062\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") "
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.571997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") pod \"5865e706-eb59-4999-b451-4c5001489062\" (UID: \"5865e706-eb59-4999-b451-4c5001489062\") "
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.573383 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities" (OuterVolumeSpecName: "utilities") pod "5865e706-eb59-4999-b451-4c5001489062" (UID: "5865e706-eb59-4999-b451-4c5001489062"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.578027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9" (OuterVolumeSpecName: "kube-api-access-g4km9") pod "5865e706-eb59-4999-b451-4c5001489062" (UID: "5865e706-eb59-4999-b451-4c5001489062"). InnerVolumeSpecName "kube-api-access-g4km9".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.610976 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.629008 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5865e706-eb59-4999-b451-4c5001489062" (UID: "5865e706-eb59-4999-b451-4c5001489062"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.676197 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.676230 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4km9\" (UniqueName: \"kubernetes.io/projected/5865e706-eb59-4999-b451-4c5001489062-kube-api-access-g4km9\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.676262 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5865e706-eb59-4999-b451-4c5001489062-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.989293 4820 generic.go:334] "Generic (PLEG): container finished" podID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" exitCode=0 Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.989388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" 
event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.989787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerStarted","Data":"396dcc2f3776efeb74c16675baa8e8700050f1b9c29a10cfc8a1c94667cc1207"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995901 4820 generic.go:334] "Generic (PLEG): container finished" podID="5865e706-eb59-4999-b451-4c5001489062" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" exitCode=0 Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995942 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9ngv" Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.995992 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9ngv" event={"ID":"5865e706-eb59-4999-b451-4c5001489062","Type":"ContainerDied","Data":"f842890b59139105b6197f590c30468b53f9a314697680080deb4c7ee76c2722"} Feb 21 08:50:24 crc kubenswrapper[4820]: I0221 08:50:24.996013 4820 scope.go:117] "RemoveContainer" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.045819 4820 scope.go:117] "RemoveContainer" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.058134 4820 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.066615 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m9ngv"] Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.086291 4820 scope.go:117] "RemoveContainer" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.115689 4820 scope.go:117] "RemoveContainer" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" Feb 21 08:50:25 crc kubenswrapper[4820]: E0221 08:50:25.116333 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd\": container with ID starting with 244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd not found: ID does not exist" containerID="244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116360 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd"} err="failed to get container status \"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd\": rpc error: code = NotFound desc = could not find container \"244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd\": container with ID starting with 244d90fdddbd4db889b743532013f4b4594db7dfd61d3f8e07d8117f09f4fddd not found: ID does not exist" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116388 4820 scope.go:117] "RemoveContainer" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" Feb 21 08:50:25 crc kubenswrapper[4820]: E0221 08:50:25.116712 4820 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2\": container with ID starting with 8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2 not found: ID does not exist" containerID="8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116741 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2"} err="failed to get container status \"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2\": rpc error: code = NotFound desc = could not find container \"8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2\": container with ID starting with 8f987f759da53ed1f72e97f6c174ef269d577eb10d5470ea1a3deed248825ba2 not found: ID does not exist" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.116756 4820 scope.go:117] "RemoveContainer" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" Feb 21 08:50:25 crc kubenswrapper[4820]: E0221 08:50:25.117069 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d\": container with ID starting with d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d not found: ID does not exist" containerID="d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.117091 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d"} err="failed to get container status \"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d\": rpc error: code = NotFound desc = could not find container 
\"d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d\": container with ID starting with d439c944ea1eb464fa4a795253f97a2958c55b50beb642c4aa69bd5c79149a8d not found: ID does not exist" Feb 21 08:50:25 crc kubenswrapper[4820]: I0221 08:50:25.710964 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5865e706-eb59-4999-b451-4c5001489062" path="/var/lib/kubelet/pods/5865e706-eb59-4999-b451-4c5001489062/volumes" Feb 21 08:50:26 crc kubenswrapper[4820]: I0221 08:50:26.008330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerStarted","Data":"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757"} Feb 21 08:50:27 crc kubenswrapper[4820]: I0221 08:50:27.021083 4820 generic.go:334] "Generic (PLEG): container finished" podID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" exitCode=0 Feb 21 08:50:27 crc kubenswrapper[4820]: I0221 08:50:27.021157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757"} Feb 21 08:50:28 crc kubenswrapper[4820]: I0221 08:50:28.032364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerStarted","Data":"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3"} Feb 21 08:50:28 crc kubenswrapper[4820]: I0221 08:50:28.061801 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4h88z" podStartSLOduration=2.441727177 podStartE2EDuration="5.061782758s" podCreationTimestamp="2026-02-21 08:50:23 +0000 UTC" 
firstStartedPulling="2026-02-21 08:50:24.99120205 +0000 UTC m=+7400.024286248" lastFinishedPulling="2026-02-21 08:50:27.611257631 +0000 UTC m=+7402.644341829" observedRunningTime="2026-02-21 08:50:28.05340044 +0000 UTC m=+7403.086484658" watchObservedRunningTime="2026-02-21 08:50:28.061782758 +0000 UTC m=+7403.094866956" Feb 21 08:50:33 crc kubenswrapper[4820]: I0221 08:50:33.089101 4820 generic.go:334] "Generic (PLEG): container finished" podID="4449546f-cb82-4976-b53e-cad851a6369d" containerID="48d8a8222a29a51075191c85ce26d089db15eaa1bb388d6665c186ce14164e1c" exitCode=0 Feb 21 08:50:33 crc kubenswrapper[4820]: I0221 08:50:33.089170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerDied","Data":"48d8a8222a29a51075191c85ce26d089db15eaa1bb388d6665c186ce14164e1c"} Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.078120 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.079515 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.187942 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.656406 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.757158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") pod \"4449546f-cb82-4976-b53e-cad851a6369d\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.757266 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") pod \"4449546f-cb82-4976-b53e-cad851a6369d\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.757301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") pod \"4449546f-cb82-4976-b53e-cad851a6369d\" (UID: \"4449546f-cb82-4976-b53e-cad851a6369d\") " Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.764314 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p" (OuterVolumeSpecName: "kube-api-access-mrh4p") pod "4449546f-cb82-4976-b53e-cad851a6369d" (UID: "4449546f-cb82-4976-b53e-cad851a6369d"). InnerVolumeSpecName "kube-api-access-mrh4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.815447 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4449546f-cb82-4976-b53e-cad851a6369d" (UID: "4449546f-cb82-4976-b53e-cad851a6369d"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.815500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory" (OuterVolumeSpecName: "inventory") pod "4449546f-cb82-4976-b53e-cad851a6369d" (UID: "4449546f-cb82-4976-b53e-cad851a6369d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.860434 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.861261 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4449546f-cb82-4976-b53e-cad851a6369d-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:34 crc kubenswrapper[4820]: I0221 08:50:34.861282 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrh4p\" (UniqueName: \"kubernetes.io/projected/4449546f-cb82-4976-b53e-cad851a6369d-kube-api-access-mrh4p\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.117409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" event={"ID":"4449546f-cb82-4976-b53e-cad851a6369d","Type":"ContainerDied","Data":"660cea90273488044b3e84b2bdf63f34b73113632b5713fa9be7b6b1c4f0dfaa"} Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.117669 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660cea90273488044b3e84b2bdf63f34b73113632b5713fa9be7b6b1c4f0dfaa" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.117426 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-42cjk" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.204992 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.209838 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-pdbm6"] Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210215 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210378 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210404 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-utilities" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210411 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-utilities" Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210423 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4449546f-cb82-4976-b53e-cad851a6369d" containerName="reboot-os-openstack-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210429 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4449546f-cb82-4976-b53e-cad851a6369d" containerName="reboot-os-openstack-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: E0221 08:50:35.210476 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-content" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210483 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="extract-content" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210704 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5865e706-eb59-4999-b451-4c5001489062" containerName="registry-server" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.210714 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4449546f-cb82-4976-b53e-cad851a6369d" containerName="reboot-os-openstack-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.211376 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.214545 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.215988 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.216306 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.217202 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.218272 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.218612 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.218839 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Feb 21 08:50:35 crc 
kubenswrapper[4820]: I0221 08:50:35.219140 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.240721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-pdbm6"] Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.312143 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.371892 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.371953 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.371979 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372027 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372097 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372162 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372204 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372267 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.372336 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: 
\"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474709 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474883 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474946 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.474980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc 
kubenswrapper[4820]: I0221 08:50:35.475006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.479565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.481214 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.481328 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.481654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482203 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.482901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " 
pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.484760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.485012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.485595 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.485707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.487555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.488738 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.507539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"install-certs-openstack-openstack-cell1-pdbm6\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.530258 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:50:35 crc kubenswrapper[4820]: I0221 08:50:35.896939 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-pdbm6"] Feb 21 08:50:35 crc kubenswrapper[4820]: W0221 08:50:35.905027 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddf72439_0ca3_4cbc_8186_fe74744a71e4.slice/crio-ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45 WatchSource:0}: Error finding container ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45: Status 404 returned error can't find the container with id ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45 Feb 21 08:50:36 crc kubenswrapper[4820]: I0221 08:50:36.126401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerStarted","Data":"ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45"} Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.143953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerStarted","Data":"80da0a677fa473c619a4ef201c03d71a60e78328965d495f18ae4e687e4aea94"} Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.144607 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4h88z" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" containerID="cri-o://07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" gracePeriod=2 Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.201634 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" podStartSLOduration=1.545287724 podStartE2EDuration="2.201614829s" podCreationTimestamp="2026-02-21 08:50:35 +0000 UTC" firstStartedPulling="2026-02-21 08:50:35.907446549 +0000 UTC m=+7410.940530747" lastFinishedPulling="2026-02-21 08:50:36.563773654 +0000 UTC m=+7411.596857852" observedRunningTime="2026-02-21 08:50:37.184044301 +0000 UTC m=+7412.217128529" watchObservedRunningTime="2026-02-21 08:50:37.201614829 +0000 UTC m=+7412.234699027" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.578927 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.719756 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") pod \"0df50340-ab5d-4f64-a931-2f795141a7d3\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.719948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") pod \"0df50340-ab5d-4f64-a931-2f795141a7d3\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.720140 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") pod \"0df50340-ab5d-4f64-a931-2f795141a7d3\" (UID: \"0df50340-ab5d-4f64-a931-2f795141a7d3\") " Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.720884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities" 
(OuterVolumeSpecName: "utilities") pod "0df50340-ab5d-4f64-a931-2f795141a7d3" (UID: "0df50340-ab5d-4f64-a931-2f795141a7d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.726154 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47" (OuterVolumeSpecName: "kube-api-access-jmm47") pod "0df50340-ab5d-4f64-a931-2f795141a7d3" (UID: "0df50340-ab5d-4f64-a931-2f795141a7d3"). InnerVolumeSpecName "kube-api-access-jmm47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.747861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0df50340-ab5d-4f64-a931-2f795141a7d3" (UID: "0df50340-ab5d-4f64-a931-2f795141a7d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.822311 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.822677 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0df50340-ab5d-4f64-a931-2f795141a7d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:37 crc kubenswrapper[4820]: I0221 08:50:37.822688 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmm47\" (UniqueName: \"kubernetes.io/projected/0df50340-ab5d-4f64-a931-2f795141a7d3-kube-api-access-jmm47\") on node \"crc\" DevicePath \"\"" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.169902 4820 generic.go:334] "Generic (PLEG): container finished" podID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" exitCode=0 Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.169963 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3"} Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.170014 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4h88z" event={"ID":"0df50340-ab5d-4f64-a931-2f795141a7d3","Type":"ContainerDied","Data":"396dcc2f3776efeb74c16675baa8e8700050f1b9c29a10cfc8a1c94667cc1207"} Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.170010 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4h88z" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.170059 4820 scope.go:117] "RemoveContainer" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.197558 4820 scope.go:117] "RemoveContainer" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.239844 4820 scope.go:117] "RemoveContainer" containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.241276 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.251971 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4h88z"] Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.273802 4820 scope.go:117] "RemoveContainer" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" Feb 21 08:50:38 crc kubenswrapper[4820]: E0221 08:50:38.274363 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3\": container with ID starting with 07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3 not found: ID does not exist" containerID="07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274420 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3"} err="failed to get container status \"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3\": rpc error: code = NotFound desc = could not find container 
\"07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3\": container with ID starting with 07ae14bca01cf4ca34e0da1cb1b9eabe8263e01c6f7bf8351959d4991c8d9ef3 not found: ID does not exist" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274450 4820 scope.go:117] "RemoveContainer" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" Feb 21 08:50:38 crc kubenswrapper[4820]: E0221 08:50:38.274927 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757\": container with ID starting with 4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757 not found: ID does not exist" containerID="4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274950 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757"} err="failed to get container status \"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757\": rpc error: code = NotFound desc = could not find container \"4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757\": container with ID starting with 4e436b3cfe04c38dded49fe7f66940addcbacd55f5fba682e4958a23aaf53757 not found: ID does not exist" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.274966 4820 scope.go:117] "RemoveContainer" containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" Feb 21 08:50:38 crc kubenswrapper[4820]: E0221 08:50:38.275285 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f\": container with ID starting with 3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f not found: ID does not exist" 
containerID="3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f" Feb 21 08:50:38 crc kubenswrapper[4820]: I0221 08:50:38.275328 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f"} err="failed to get container status \"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f\": rpc error: code = NotFound desc = could not find container \"3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f\": container with ID starting with 3cda80015b7e0ef61f8afbec3502d81530a2f59908d9c92e3bc25b3c0778718f not found: ID does not exist" Feb 21 08:50:39 crc kubenswrapper[4820]: I0221 08:50:39.708397 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" path="/var/lib/kubelet/pods/0df50340-ab5d-4f64-a931-2f795141a7d3/volumes" Feb 21 08:50:43 crc kubenswrapper[4820]: I0221 08:50:43.815759 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:50:43 crc kubenswrapper[4820]: I0221 08:50:43.816166 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:51:11 crc kubenswrapper[4820]: I0221 08:51:11.527103 4820 generic.go:334] "Generic (PLEG): container finished" podID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerID="80da0a677fa473c619a4ef201c03d71a60e78328965d495f18ae4e687e4aea94" exitCode=0 Feb 21 08:51:11 crc kubenswrapper[4820]: I0221 08:51:11.527163 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerDied","Data":"80da0a677fa473c619a4ef201c03d71a60e78328965d495f18ae4e687e4aea94"} Feb 21 08:51:12 crc kubenswrapper[4820]: I0221 08:51:12.976101 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.077888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.077960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078004 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078044 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 
08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.078928 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc 
kubenswrapper[4820]: I0221 08:51:13.079311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079339 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079376 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079404 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079541 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.079615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") pod \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\" (UID: \"ddf72439-0ca3-4cbc-8186-fe74744a71e4\") " Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.085084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.085909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.085964 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086159 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.086495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.087728 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.088192 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.088445 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7" (OuterVolumeSpecName: "kube-api-access-2tqd7") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "kube-api-access-2tqd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.088526 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.089931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.090340 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.114450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory" (OuterVolumeSpecName: "inventory") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.123352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ddf72439-0ca3-4cbc-8186-fe74744a71e4" (UID: "ddf72439-0ca3-4cbc-8186-fe74744a71e4"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181852 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tqd7\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-kube-api-access-2tqd7\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181884 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181897 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181906 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181917 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181927 4820 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181936 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181945 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181954 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181964 4820 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181973 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181981 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.181989 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 
08:51:13.182000 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ddf72439-0ca3-4cbc-8186-fe74744a71e4-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.182009 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf72439-0ca3-4cbc-8186-fe74744a71e4-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.550258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" event={"ID":"ddf72439-0ca3-4cbc-8186-fe74744a71e4","Type":"ContainerDied","Data":"ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45"} Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.550297 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce4aac2d6f3af594aff9228359ea0551acc3be6b1bc752c9dacd87ca44257c45" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.550352 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-pdbm6" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.707793 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-hxv8b"] Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708156 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerName="install-certs-openstack-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708176 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerName="install-certs-openstack-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708201 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-utilities" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708211 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-utilities" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-content" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708224 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="extract-content" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.708258 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708265 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708523 4820 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0df50340-ab5d-4f64-a931-2f795141a7d3" containerName="registry-server" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.708560 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf72439-0ca3-4cbc-8186-fe74744a71e4" containerName="install-certs-openstack-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.715610 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.721593 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.721905 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.722050 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.724156 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.725677 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.729695 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-hxv8b"] Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.818587 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.818632 4820 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.818667 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.819348 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.819402 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" gracePeriod=600 Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.898705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.899182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.900093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.900282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: I0221 08:51:13.900823 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:13 crc kubenswrapper[4820]: E0221 08:51:13.948834 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.002923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod 
\"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.004637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.007601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.007774 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.011740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.021311 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65pv7\" (UniqueName: 
\"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"ovn-openstack-openstack-cell1-hxv8b\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.057615 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.563914 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-hxv8b"] Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.564357 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5"} Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.564272 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" exitCode=0 Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.564408 4820 scope.go:117] "RemoveContainer" containerID="1e52876a5d65e4dbfbc3bbb405ed2e1fc047a888cbb3cf03140368de5b7b9380" Feb 21 08:51:14 crc kubenswrapper[4820]: I0221 08:51:14.565086 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:14 crc kubenswrapper[4820]: E0221 08:51:14.565496 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:51:15 crc kubenswrapper[4820]: I0221 08:51:15.577376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerStarted","Data":"5e2d6c6a6a56b36d47f6d37b7a5e5d4e20ae4331c4dd41f9a9759c9682272d5c"} Feb 21 08:51:15 crc kubenswrapper[4820]: I0221 08:51:15.577921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerStarted","Data":"52286443a8a7e11d2180d2d7e026dd626ab9f7359a90fed94007042d0b395ec4"} Feb 21 08:51:15 crc kubenswrapper[4820]: I0221 08:51:15.603500 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" podStartSLOduration=2.061790109 podStartE2EDuration="2.603477892s" podCreationTimestamp="2026-02-21 08:51:13 +0000 UTC" firstStartedPulling="2026-02-21 08:51:14.568523732 +0000 UTC m=+7449.601607950" lastFinishedPulling="2026-02-21 08:51:15.110211535 +0000 UTC m=+7450.143295733" observedRunningTime="2026-02-21 08:51:15.598205489 +0000 UTC m=+7450.631289687" watchObservedRunningTime="2026-02-21 08:51:15.603477892 +0000 UTC m=+7450.636562090" Feb 21 08:51:29 crc kubenswrapper[4820]: I0221 08:51:29.698146 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:29 crc kubenswrapper[4820]: E0221 08:51:29.699531 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 
21 08:51:41 crc kubenswrapper[4820]: I0221 08:51:41.696824 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:41 crc kubenswrapper[4820]: E0221 08:51:41.697563 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:51:52 crc kubenswrapper[4820]: I0221 08:51:52.697455 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:51:52 crc kubenswrapper[4820]: E0221 08:51:52.698830 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:05 crc kubenswrapper[4820]: I0221 08:52:05.705372 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:05 crc kubenswrapper[4820]: E0221 08:52:05.706346 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:14 crc kubenswrapper[4820]: I0221 08:52:14.125016 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerID="5e2d6c6a6a56b36d47f6d37b7a5e5d4e20ae4331c4dd41f9a9759c9682272d5c" exitCode=0 Feb 21 08:52:14 crc kubenswrapper[4820]: I0221 08:52:14.125100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerDied","Data":"5e2d6c6a6a56b36d47f6d37b7a5e5d4e20ae4331c4dd41f9a9759c9682272d5c"} Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.596946 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.709948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710100 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710387 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.710561 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") pod \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\" (UID: \"7b3e6252-4e79-4ce6-87f1-8b0e8c885536\") " Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.717608 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.717977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7" (OuterVolumeSpecName: "kube-api-access-65pv7") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "kube-api-access-65pv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.737585 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.738634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.739320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory" (OuterVolumeSpecName: "inventory") pod "7b3e6252-4e79-4ce6-87f1-8b0e8c885536" (UID: "7b3e6252-4e79-4ce6-87f1-8b0e8c885536"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812864 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812899 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812907 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65pv7\" (UniqueName: \"kubernetes.io/projected/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-kube-api-access-65pv7\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812916 4820 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:15 crc kubenswrapper[4820]: I0221 08:52:15.812927 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b3e6252-4e79-4ce6-87f1-8b0e8c885536-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.143576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" event={"ID":"7b3e6252-4e79-4ce6-87f1-8b0e8c885536","Type":"ContainerDied","Data":"52286443a8a7e11d2180d2d7e026dd626ab9f7359a90fed94007042d0b395ec4"} Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.143621 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52286443a8a7e11d2180d2d7e026dd626ab9f7359a90fed94007042d0b395ec4" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.143651 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-hxv8b" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.249118 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-49ck6"] Feb 21 08:52:16 crc kubenswrapper[4820]: E0221 08:52:16.249588 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerName="ovn-openstack-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.249607 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerName="ovn-openstack-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.249809 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3e6252-4e79-4ce6-87f1-8b0e8c885536" containerName="ovn-openstack-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.250575 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.254857 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.254899 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.254922 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.255188 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.255196 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.255227 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.265165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-49ck6"] Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323114 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323440 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323799 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.323936 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.425383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.425995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426102 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod 
\"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.426477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.429611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.431398 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.443165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod \"neutron-metadata-openstack-openstack-cell1-49ck6\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:16 crc kubenswrapper[4820]: I0221 08:52:16.574335 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:52:17 crc kubenswrapper[4820]: I0221 08:52:17.115181 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-49ck6"] Feb 21 08:52:17 crc kubenswrapper[4820]: I0221 08:52:17.155515 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerStarted","Data":"cc2514c562a44f0a7fd5f11484927e8d4189244e682650e405bfba22a08315da"} Feb 21 08:52:17 crc kubenswrapper[4820]: I0221 08:52:17.701825 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:17 crc kubenswrapper[4820]: E0221 08:52:17.702366 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:18 crc kubenswrapper[4820]: I0221 08:52:18.165632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerStarted","Data":"88e57bcc025792d8e38fd0ad998e6fc47ccd2f16a36f693b956b897bdd02ed1d"} Feb 21 08:52:18 crc kubenswrapper[4820]: I0221 08:52:18.185101 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" podStartSLOduration=1.726083605 podStartE2EDuration="2.185085192s" podCreationTimestamp="2026-02-21 08:52:16 +0000 UTC" firstStartedPulling="2026-02-21 08:52:17.114107724 +0000 UTC 
m=+7512.147191922" lastFinishedPulling="2026-02-21 08:52:17.573109311 +0000 UTC m=+7512.606193509" observedRunningTime="2026-02-21 08:52:18.18021675 +0000 UTC m=+7513.213300948" watchObservedRunningTime="2026-02-21 08:52:18.185085192 +0000 UTC m=+7513.218169390" Feb 21 08:52:30 crc kubenswrapper[4820]: I0221 08:52:30.697388 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:30 crc kubenswrapper[4820]: E0221 08:52:30.698145 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:44 crc kubenswrapper[4820]: I0221 08:52:44.697785 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:44 crc kubenswrapper[4820]: E0221 08:52:44.698802 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:52:59 crc kubenswrapper[4820]: I0221 08:52:59.697003 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:52:59 crc kubenswrapper[4820]: E0221 08:52:59.697789 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:06 crc kubenswrapper[4820]: I0221 08:53:06.638231 4820 generic.go:334] "Generic (PLEG): container finished" podID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerID="88e57bcc025792d8e38fd0ad998e6fc47ccd2f16a36f693b956b897bdd02ed1d" exitCode=0 Feb 21 08:53:06 crc kubenswrapper[4820]: I0221 08:53:06.638316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerDied","Data":"88e57bcc025792d8e38fd0ad998e6fc47ccd2f16a36f693b956b897bdd02ed1d"} Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.087611 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.212819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.212936 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.212983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.213124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.213208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.213234 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") pod \"915c12d6-5a69-4e4b-a001-b9e865d4377b\" (UID: \"915c12d6-5a69-4e4b-a001-b9e865d4377b\") " Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.226880 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.226892 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw" (OuterVolumeSpecName: "kube-api-access-58hfw") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "kube-api-access-58hfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.240026 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.241540 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.242414 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.250034 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory" (OuterVolumeSpecName: "inventory") pod "915c12d6-5a69-4e4b-a001-b9e865d4377b" (UID: "915c12d6-5a69-4e4b-a001-b9e865d4377b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316404 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316440 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316452 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316463 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58hfw\" (UniqueName: \"kubernetes.io/projected/915c12d6-5a69-4e4b-a001-b9e865d4377b-kube-api-access-58hfw\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316475 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.316486 4820 reconciler_common.go:293] "Volume 
detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c12d6-5a69-4e4b-a001-b9e865d4377b-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.656068 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" event={"ID":"915c12d6-5a69-4e4b-a001-b9e865d4377b","Type":"ContainerDied","Data":"cc2514c562a44f0a7fd5f11484927e8d4189244e682650e405bfba22a08315da"} Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.656441 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2514c562a44f0a7fd5f11484927e8d4189244e682650e405bfba22a08315da" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.656143 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-49ck6" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.762662 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vxt45"] Feb 21 08:53:08 crc kubenswrapper[4820]: E0221 08:53:08.763150 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerName="neutron-metadata-openstack-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.763175 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerName="neutron-metadata-openstack-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.763406 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="915c12d6-5a69-4e4b-a001-b9e865d4377b" containerName="neutron-metadata-openstack-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.764178 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.767016 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.767352 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.767589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.768360 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.771013 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.776424 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vxt45"] Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc 
kubenswrapper[4820]: I0221 08:53:08.928912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:08 crc kubenswrapper[4820]: I0221 08:53:08.928992 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031754 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031820 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod 
\"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.031911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.032099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.035440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.035435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.036372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.036788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.052337 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"libvirt-openstack-openstack-cell1-vxt45\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.091317 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.618057 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vxt45"] Feb 21 08:53:09 crc kubenswrapper[4820]: I0221 08:53:09.666606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerStarted","Data":"d8e9644c8bef8a192c46b655a73b8f20241d69342d79e5fe4036cd3cf7fab8a3"} Feb 21 08:53:10 crc kubenswrapper[4820]: I0221 08:53:10.678134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerStarted","Data":"ddfbddd03a6e4efbd1ac4be5b0fef2e56a4e1b828e29effffb53ba3c5a926ea3"} Feb 21 08:53:10 crc kubenswrapper[4820]: I0221 08:53:10.699070 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" podStartSLOduration=2.17756973 podStartE2EDuration="2.699049764s" podCreationTimestamp="2026-02-21 08:53:08 +0000 UTC" firstStartedPulling="2026-02-21 08:53:09.6300897 +0000 UTC m=+7564.663173898" lastFinishedPulling="2026-02-21 08:53:10.151569714 +0000 UTC m=+7565.184653932" observedRunningTime="2026-02-21 08:53:10.69483393 +0000 UTC m=+7565.727918138" watchObservedRunningTime="2026-02-21 08:53:10.699049764 +0000 UTC m=+7565.732133962" Feb 21 08:53:12 crc kubenswrapper[4820]: I0221 08:53:12.696727 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:12 crc kubenswrapper[4820]: E0221 08:53:12.697579 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:27 crc kubenswrapper[4820]: I0221 08:53:27.697076 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:27 crc kubenswrapper[4820]: E0221 08:53:27.697866 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:40 crc kubenswrapper[4820]: I0221 08:53:40.697714 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:40 crc kubenswrapper[4820]: E0221 08:53:40.698983 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:53:55 crc kubenswrapper[4820]: I0221 08:53:55.706086 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:53:55 crc kubenswrapper[4820]: E0221 08:53:55.706924 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:06 crc kubenswrapper[4820]: I0221 08:54:06.696980 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:06 crc kubenswrapper[4820]: E0221 08:54:06.698120 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:19 crc kubenswrapper[4820]: I0221 08:54:19.696855 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:19 crc kubenswrapper[4820]: E0221 08:54:19.697557 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:31 crc kubenswrapper[4820]: I0221 08:54:31.697662 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:31 crc kubenswrapper[4820]: E0221 08:54:31.705408 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:46 crc kubenswrapper[4820]: I0221 08:54:46.697402 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:46 crc kubenswrapper[4820]: E0221 08:54:46.698366 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:54:58 crc kubenswrapper[4820]: I0221 08:54:58.697539 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:54:58 crc kubenswrapper[4820]: E0221 08:54:58.698350 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:13 crc kubenswrapper[4820]: I0221 08:55:13.696934 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:13 crc kubenswrapper[4820]: E0221 08:55:13.697732 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:28 crc kubenswrapper[4820]: I0221 08:55:28.697216 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:28 crc kubenswrapper[4820]: E0221 08:55:28.698069 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:39 crc kubenswrapper[4820]: I0221 08:55:39.696965 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:39 crc kubenswrapper[4820]: E0221 08:55:39.697711 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:55:52 crc kubenswrapper[4820]: I0221 08:55:52.696208 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:55:52 crc kubenswrapper[4820]: E0221 08:55:52.697051 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:56:05 crc kubenswrapper[4820]: I0221 08:56:05.708173 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:56:05 crc kubenswrapper[4820]: E0221 08:56:05.709272 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 08:56:20 crc kubenswrapper[4820]: I0221 08:56:20.696763 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 08:56:21 crc kubenswrapper[4820]: I0221 08:56:21.554806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be"} Feb 21 08:57:28 crc kubenswrapper[4820]: I0221 08:57:28.781070 4820 generic.go:334] "Generic (PLEG): container finished" podID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerID="ddfbddd03a6e4efbd1ac4be5b0fef2e56a4e1b828e29effffb53ba3c5a926ea3" exitCode=0 Feb 21 08:57:28 crc kubenswrapper[4820]: I0221 08:57:28.781147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" 
event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerDied","Data":"ddfbddd03a6e4efbd1ac4be5b0fef2e56a4e1b828e29effffb53ba3c5a926ea3"} Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.220455 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371415 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.371676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") pod \"d646e04b-4083-4b58-a73f-47c72ba78dcc\" (UID: \"d646e04b-4083-4b58-a73f-47c72ba78dcc\") " Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.376693 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.376869 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9" (OuterVolumeSpecName: "kube-api-access-j2qq9") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "kube-api-access-j2qq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.401004 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory" (OuterVolumeSpecName: "inventory") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.404298 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.413151 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d646e04b-4083-4b58-a73f-47c72ba78dcc" (UID: "d646e04b-4083-4b58-a73f-47c72ba78dcc"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473934 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2qq9\" (UniqueName: \"kubernetes.io/projected/d646e04b-4083-4b58-a73f-47c72ba78dcc-kube-api-access-j2qq9\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473972 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473983 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.473992 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.474002 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646e04b-4083-4b58-a73f-47c72ba78dcc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.810369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-openstack-openstack-cell1-vxt45" event={"ID":"d646e04b-4083-4b58-a73f-47c72ba78dcc","Type":"ContainerDied","Data":"d8e9644c8bef8a192c46b655a73b8f20241d69342d79e5fe4036cd3cf7fab8a3"} Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.810645 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e9644c8bef8a192c46b655a73b8f20241d69342d79e5fe4036cd3cf7fab8a3" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.810511 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vxt45" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.927039 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-w4sqf"] Feb 21 08:57:30 crc kubenswrapper[4820]: E0221 08:57:30.927450 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerName="libvirt-openstack-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.927470 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerName="libvirt-openstack-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.927687 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d646e04b-4083-4b58-a73f-47c72ba78dcc" containerName="libvirt-openstack-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.928390 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931559 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931689 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931696 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.931550 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.932178 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 08:57:30 crc kubenswrapper[4820]: I0221 08:57:30.944020 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-w4sqf"] Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085546 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.085955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086086 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086314 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.086344 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188483 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188576 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188740 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.188967 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189039 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.189470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" 
(UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.192995 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.193341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.193545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 
crc kubenswrapper[4820]: I0221 08:57:31.193778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.195104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.195484 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.196335 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.208159 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod \"nova-cell1-openstack-openstack-cell1-w4sqf\" (UID: 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.248690 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.767217 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-w4sqf"] Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.768634 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 08:57:31 crc kubenswrapper[4820]: I0221 08:57:31.820674 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerStarted","Data":"d8d278d55094e3f450ccb801d156b382e74efb6682a5f9d8786fa32fe1361576"} Feb 21 08:57:32 crc kubenswrapper[4820]: I0221 08:57:32.837191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerStarted","Data":"befeb7b23e0d32e5f4bea933447efbbf552e3f18186c4da87edc04135ee4581f"} Feb 21 08:57:32 crc kubenswrapper[4820]: I0221 08:57:32.881801 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" podStartSLOduration=2.484563344 podStartE2EDuration="2.881781842s" podCreationTimestamp="2026-02-21 08:57:30 +0000 UTC" firstStartedPulling="2026-02-21 08:57:31.768407493 +0000 UTC m=+7826.801491691" lastFinishedPulling="2026-02-21 08:57:32.165625971 +0000 UTC m=+7827.198710189" observedRunningTime="2026-02-21 08:57:32.866289832 +0000 UTC m=+7827.899374040" watchObservedRunningTime="2026-02-21 08:57:32.881781842 +0000 UTC m=+7827.914866040" Feb 21 08:58:43 crc kubenswrapper[4820]: I0221 
08:58:43.816315 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:58:43 crc kubenswrapper[4820]: I0221 08:58:43.816859 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:59:13 crc kubenswrapper[4820]: I0221 08:59:13.815915 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:59:13 crc kubenswrapper[4820]: I0221 08:59:13.816564 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.816391 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.817164 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" 
podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.817321 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.818706 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 08:59:43 crc kubenswrapper[4820]: I0221 08:59:43.818837 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be" gracePeriod=600 Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434058 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be" exitCode=0 Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be"} Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"} Feb 21 08:59:44 crc kubenswrapper[4820]: I0221 08:59:44.434771 4820 scope.go:117] "RemoveContainer" containerID="9e93abd76dfeed7843ec600497433419812967848e14c866f017680cd50608a5" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.159284 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7"] Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.161501 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.164774 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.165092 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.170569 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7"] Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.244326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.244493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.244533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.345771 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.345850 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.345914 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.347005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.352775 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.363110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"collect-profiles-29527740-hbwp7\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.490569 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:00 crc kubenswrapper[4820]: I0221 09:00:00.939835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7"] Feb 21 09:00:01 crc kubenswrapper[4820]: I0221 09:00:01.617527 4820 generic.go:334] "Generic (PLEG): container finished" podID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerID="7119a71f451213b74dba191bad0f1b026958a391d5a6a83fbc51ac9fc67a87c6" exitCode=0 Feb 21 09:00:01 crc kubenswrapper[4820]: I0221 09:00:01.617604 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" event={"ID":"81a30ae4-a5a5-4206-a3aa-b932f49d51fc","Type":"ContainerDied","Data":"7119a71f451213b74dba191bad0f1b026958a391d5a6a83fbc51ac9fc67a87c6"} Feb 21 09:00:01 crc kubenswrapper[4820]: I0221 09:00:01.617855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" event={"ID":"81a30ae4-a5a5-4206-a3aa-b932f49d51fc","Type":"ContainerStarted","Data":"017701d2c927143847805e9922d2a7e1890220323e6e1cd4b137722cb9f30e35"} Feb 21 09:00:02 crc kubenswrapper[4820]: I0221 09:00:02.924568 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.000200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") pod \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.000398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") pod \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.000424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") pod \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\" (UID: \"81a30ae4-a5a5-4206-a3aa-b932f49d51fc\") " Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.001361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "81a30ae4-a5a5-4206-a3aa-b932f49d51fc" (UID: "81a30ae4-a5a5-4206-a3aa-b932f49d51fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.005703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81a30ae4-a5a5-4206-a3aa-b932f49d51fc" (UID: "81a30ae4-a5a5-4206-a3aa-b932f49d51fc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.005998 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j" (OuterVolumeSpecName: "kube-api-access-7hw6j") pod "81a30ae4-a5a5-4206-a3aa-b932f49d51fc" (UID: "81a30ae4-a5a5-4206-a3aa-b932f49d51fc"). InnerVolumeSpecName "kube-api-access-7hw6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.102895 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.103192 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.103205 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hw6j\" (UniqueName: \"kubernetes.io/projected/81a30ae4-a5a5-4206-a3aa-b932f49d51fc-kube-api-access-7hw6j\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.636190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" event={"ID":"81a30ae4-a5a5-4206-a3aa-b932f49d51fc","Type":"ContainerDied","Data":"017701d2c927143847805e9922d2a7e1890220323e6e1cd4b137722cb9f30e35"} Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.636251 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017701d2c927143847805e9922d2a7e1890220323e6e1cd4b137722cb9f30e35" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.636258 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527740-hbwp7" Feb 21 09:00:03 crc kubenswrapper[4820]: I0221 09:00:03.996312 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.004515 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527695-zm45p"] Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.952309 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:04 crc kubenswrapper[4820]: E0221 09:00:04.953131 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerName="collect-profiles" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.953154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerName="collect-profiles" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.953414 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a30ae4-a5a5-4206-a3aa-b932f49d51fc" containerName="collect-profiles" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.955261 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:04 crc kubenswrapper[4820]: I0221 09:00:04.964875 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.041099 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.041226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.041265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143094 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.143813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.165010 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"redhat-operators-jr97f\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") " pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.283885 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.707444 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fce41e0-c5c8-4286-8a58-cd620c05f4fc" path="/var/lib/kubelet/pods/6fce41e0-c5c8-4286-8a58-cd620c05f4fc/volumes" Feb 21 09:00:05 crc kubenswrapper[4820]: I0221 09:00:05.748035 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"] Feb 21 09:00:05 crc kubenswrapper[4820]: W0221 09:00:05.755676 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda632316b_bf37_4d7e_8a47_1a4d453390bf.slice/crio-1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80 WatchSource:0}: Error finding container 1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80: Status 404 returned error can't find the container with id 1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80 Feb 21 09:00:06 crc kubenswrapper[4820]: I0221 09:00:06.666909 4820 generic.go:334] "Generic (PLEG): container finished" podID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554" exitCode=0 Feb 21 09:00:06 crc kubenswrapper[4820]: I0221 09:00:06.667006 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"} Feb 21 09:00:06 crc kubenswrapper[4820]: I0221 09:00:06.667290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerStarted","Data":"1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80"} Feb 21 09:00:07 crc kubenswrapper[4820]: I0221 09:00:07.678880 4820 
generic.go:334] "Generic (PLEG): container finished" podID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerID="befeb7b23e0d32e5f4bea933447efbbf552e3f18186c4da87edc04135ee4581f" exitCode=0 Feb 21 09:00:07 crc kubenswrapper[4820]: I0221 09:00:07.678960 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerDied","Data":"befeb7b23e0d32e5f4bea933447efbbf552e3f18186c4da87edc04135ee4581f"} Feb 21 09:00:07 crc kubenswrapper[4820]: I0221 09:00:07.681335 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerStarted","Data":"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"} Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.102679 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240468 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240625 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240791 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240818 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240881 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") pod 
\"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240918 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.240947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") pod \"c653de2c-8672-42fb-81c0-4e66975a3b8f\" (UID: \"c653de2c-8672-42fb-81c0-4e66975a3b8f\") " Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.247635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.255228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq" (OuterVolumeSpecName: "kube-api-access-g56mq") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "kube-api-access-g56mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.270117 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory" (OuterVolumeSpecName: "inventory") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.271553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.272129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.279466 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.281746 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.285633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.288428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.291632 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.303652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "c653de2c-8672-42fb-81c0-4e66975a3b8f" (UID: "c653de2c-8672-42fb-81c0-4e66975a3b8f"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344447 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56mq\" (UniqueName: \"kubernetes.io/projected/c653de2c-8672-42fb-81c0-4e66975a3b8f-kube-api-access-g56mq\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344491 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344505 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344519 4820 reconciler_common.go:293] "Volume detached for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344532 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344544 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344559 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344572 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344585 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344597 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.344609 4820 reconciler_common.go:293] "Volume detached for volume 
\"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c653de2c-8672-42fb-81c0-4e66975a3b8f-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.703109 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.717637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-w4sqf" event={"ID":"c653de2c-8672-42fb-81c0-4e66975a3b8f","Type":"ContainerDied","Data":"d8d278d55094e3f450ccb801d156b382e74efb6682a5f9d8786fa32fe1361576"} Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.717700 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d278d55094e3f450ccb801d156b382e74efb6682a5f9d8786fa32fe1361576" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.795873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wpbzs"] Feb 21 09:00:09 crc kubenswrapper[4820]: E0221 09:00:09.796415 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerName="nova-cell1-openstack-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.796439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerName="nova-cell1-openstack-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.796674 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653de2c-8672-42fb-81c0-4e66975a3b8f" containerName="nova-cell1-openstack-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.797543 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.800003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.800088 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.801901 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.802338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.802562 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.806335 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wpbzs"] Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.853778 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.853852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " 
pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.853940 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854108 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854162 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.854447 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.956682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957034 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957165 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod 
\"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.957647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.960770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.961144 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.961147 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.962031 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.962843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.963380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " 
pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:09 crc kubenswrapper[4820]: I0221 09:00:09.975781 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"telemetry-openstack-openstack-cell1-wpbzs\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.120794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:00:10 crc kubenswrapper[4820]: W0221 09:00:10.671318 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab763aa_fd5e_41b2_96d8_f758ad76f779.slice/crio-36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f WatchSource:0}: Error finding container 36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f: Status 404 returned error can't find the container with id 36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.674222 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-wpbzs"] Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.713476 4820 generic.go:334] "Generic (PLEG): container finished" podID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711" exitCode=0 Feb 21 09:00:10 crc kubenswrapper[4820]: I0221 09:00:10.713550 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"} Feb 21 09:00:10 crc 
kubenswrapper[4820]: I0221 09:00:10.715159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerStarted","Data":"36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f"} Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.733101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerStarted","Data":"32ed0675cc1bb9abf6513200d402c172eeaa97600b28f6d1ad6567f4b0f54be1"} Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.735360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerStarted","Data":"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"} Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.781728 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" podStartSLOduration=2.732144103 podStartE2EDuration="3.78170094s" podCreationTimestamp="2026-02-21 09:00:09 +0000 UTC" firstStartedPulling="2026-02-21 09:00:10.674802056 +0000 UTC m=+7985.707886254" lastFinishedPulling="2026-02-21 09:00:11.724358893 +0000 UTC m=+7986.757443091" observedRunningTime="2026-02-21 09:00:12.760431372 +0000 UTC m=+7987.793515570" watchObservedRunningTime="2026-02-21 09:00:12.78170094 +0000 UTC m=+7987.814785148" Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.795418 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jr97f" podStartSLOduration=3.6099838650000002 podStartE2EDuration="8.795401872s" podCreationTimestamp="2026-02-21 09:00:04 +0000 UTC" firstStartedPulling="2026-02-21 09:00:06.669892272 +0000 UTC m=+7981.702976470" 
lastFinishedPulling="2026-02-21 09:00:11.855310279 +0000 UTC m=+7986.888394477" observedRunningTime="2026-02-21 09:00:12.785720229 +0000 UTC m=+7987.818804437" watchObservedRunningTime="2026-02-21 09:00:12.795401872 +0000 UTC m=+7987.828486070" Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.944153 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.947161 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:12 crc kubenswrapper[4820]: I0221 09:00:12.963519 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.018008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.018064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.018383 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " 
pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.120758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.120885 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.120975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.121464 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.121521 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " 
pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.139538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"certified-operators-tt9xg\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") " pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.264096 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg" Feb 21 09:00:13 crc kubenswrapper[4820]: I0221 09:00:13.831448 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"] Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.735570 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.738847 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.764023 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.768181 4820 generic.go:334] "Generic (PLEG): container finished" podID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" exitCode=0 Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.768230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97"} Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.768283 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerStarted","Data":"0e08f3f9df523ab78f42d321c8067aaecb631aa2e3579ca30e7e3b80406e5137"} Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.863512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.863583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 
09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.863631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.965637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.965938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.965984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.966617 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 
09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.966875 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:14 crc kubenswrapper[4820]: I0221 09:00:14.986653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"community-operators-5km2z\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.061175 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.286036 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.286074 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jr97f" Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.614156 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:15 crc kubenswrapper[4820]: W0221 09:00:15.615299 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f3a2fe_753d_4282_a97e_bd85b3116def.slice/crio-4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867 WatchSource:0}: Error finding container 4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867: Status 404 returned error can't find the container with id 
4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867
Feb 21 09:00:15 crc kubenswrapper[4820]: I0221 09:00:15.778619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerStarted","Data":"4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867"}
Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.352318 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jr97f" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" probeResult="failure" output=<
Feb 21 09:00:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 09:00:16 crc kubenswrapper[4820]: >
Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.796125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerStarted","Data":"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"}
Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.798691 4820 generic.go:334] "Generic (PLEG): container finished" podID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerID="72c9e4900e6ee50688f885f43a50c51001a88071dddf3cb2ab38cb306d156501" exitCode=0
Feb 21 09:00:16 crc kubenswrapper[4820]: I0221 09:00:16.798753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"72c9e4900e6ee50688f885f43a50c51001a88071dddf3cb2ab38cb306d156501"}
Feb 21 09:00:18 crc kubenswrapper[4820]: I0221 09:00:18.819256 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z"
event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerStarted","Data":"11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6"}
Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.846871 4820 generic.go:334] "Generic (PLEG): container finished" podID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerID="11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6" exitCode=0
Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.846944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6"}
Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.852581 4820 generic.go:334] "Generic (PLEG): container finished" podID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" exitCode=0
Feb 21 09:00:21 crc kubenswrapper[4820]: I0221 09:00:21.852626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"}
Feb 21 09:00:22 crc kubenswrapper[4820]: I0221 09:00:22.864321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerStarted","Data":"ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e"}
Feb 21 09:00:22 crc kubenswrapper[4820]: I0221 09:00:22.867925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerStarted","Data":"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"}
Feb 21 09:00:22 crc kubenswrapper[4820]:
I0221 09:00:22.892074 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5km2z" podStartSLOduration=3.40571536 podStartE2EDuration="8.89205453s" podCreationTimestamp="2026-02-21 09:00:14 +0000 UTC" firstStartedPulling="2026-02-21 09:00:16.800932964 +0000 UTC m=+7991.834017172" lastFinishedPulling="2026-02-21 09:00:22.287272144 +0000 UTC m=+7997.320356342" observedRunningTime="2026-02-21 09:00:22.883827327 +0000 UTC m=+7997.916911525" watchObservedRunningTime="2026-02-21 09:00:22.89205453 +0000 UTC m=+7997.925138728"
Feb 21 09:00:22 crc kubenswrapper[4820]: I0221 09:00:22.906398 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tt9xg" podStartSLOduration=3.478237868 podStartE2EDuration="10.906381719s" podCreationTimestamp="2026-02-21 09:00:12 +0000 UTC" firstStartedPulling="2026-02-21 09:00:14.794170139 +0000 UTC m=+7989.827254337" lastFinishedPulling="2026-02-21 09:00:22.22231399 +0000 UTC m=+7997.255398188" observedRunningTime="2026-02-21 09:00:22.904921409 +0000 UTC m=+7997.938005607" watchObservedRunningTime="2026-02-21 09:00:22.906381719 +0000 UTC m=+7997.939465917"
Feb 21 09:00:23 crc kubenswrapper[4820]: I0221 09:00:23.264257 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tt9xg"
Feb 21 09:00:23 crc kubenswrapper[4820]: I0221 09:00:23.264521 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tt9xg"
Feb 21 09:00:24 crc kubenswrapper[4820]: I0221 09:00:24.315201 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tt9xg" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" probeResult="failure" output=<
Feb 21 09:00:24 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 09:00:24
crc kubenswrapper[4820]: >
Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.062381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5km2z"
Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.062439 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5km2z"
Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.106688 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5km2z"
Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.347598 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jr97f"
Feb 21 09:00:25 crc kubenswrapper[4820]: I0221 09:00:25.411799 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jr97f"
Feb 21 09:00:27 crc kubenswrapper[4820]: I0221 09:00:27.929838 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"]
Feb 21 09:00:27 crc kubenswrapper[4820]: I0221 09:00:27.930092 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jr97f" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server" containerID="cri-o://4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" gracePeriod=2
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.361892 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f"
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.547686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") pod \"a632316b-bf37-4d7e-8a47-1a4d453390bf\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") "
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.547854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") pod \"a632316b-bf37-4d7e-8a47-1a4d453390bf\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") "
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.547878 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") pod \"a632316b-bf37-4d7e-8a47-1a4d453390bf\" (UID: \"a632316b-bf37-4d7e-8a47-1a4d453390bf\") "
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.549718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities" (OuterVolumeSpecName: "utilities") pod "a632316b-bf37-4d7e-8a47-1a4d453390bf" (UID: "a632316b-bf37-4d7e-8a47-1a4d453390bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.553611 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq" (OuterVolumeSpecName: "kube-api-access-jj8nq") pod "a632316b-bf37-4d7e-8a47-1a4d453390bf" (UID: "a632316b-bf37-4d7e-8a47-1a4d453390bf"). InnerVolumeSpecName "kube-api-access-jj8nq".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.650855 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.650898 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8nq\" (UniqueName: \"kubernetes.io/projected/a632316b-bf37-4d7e-8a47-1a4d453390bf-kube-api-access-jj8nq\") on node \"crc\" DevicePath \"\""
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.654587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a632316b-bf37-4d7e-8a47-1a4d453390bf" (UID: "a632316b-bf37-4d7e-8a47-1a4d453390bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.752640 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a632316b-bf37-4d7e-8a47-1a4d453390bf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927785 4820 generic.go:334] "Generic (PLEG): container finished" podID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a" exitCode=0
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"}
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-marketplace/redhat-operators-jr97f" event={"ID":"a632316b-bf37-4d7e-8a47-1a4d453390bf","Type":"ContainerDied","Data":"1ee2abfede87870a65bb8478873d46fe7e14503bbfccdc858a20bfbafc3e4f80"}
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927874 4820 scope.go:117] "RemoveContainer" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.927878 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr97f"
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.949478 4820 scope.go:117] "RemoveContainer" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.964913 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"]
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.975020 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jr97f"]
Feb 21 09:00:28 crc kubenswrapper[4820]: I0221 09:00:28.990534 4820 scope.go:117] "RemoveContainer" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.019024 4820 scope.go:117] "RemoveContainer" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"
Feb 21 09:00:29 crc kubenswrapper[4820]: E0221 09:00:29.019469 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a\": container with ID starting with 4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a not found: ID does not exist" containerID="4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.019519 4820
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a"} err="failed to get container status \"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a\": rpc error: code = NotFound desc = could not find container \"4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a\": container with ID starting with 4289157c450db48fdc0b5382fe0018dee5108cab3cf63fbf0e764bec5cf6379a not found: ID does not exist"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.019552 4820 scope.go:117] "RemoveContainer" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"
Feb 21 09:00:29 crc kubenswrapper[4820]: E0221 09:00:29.019994 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711\": container with ID starting with b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711 not found: ID does not exist" containerID="b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.020033 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711"} err="failed to get container status \"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711\": rpc error: code = NotFound desc = could not find container \"b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711\": container with ID starting with b792900d2301508038419ccf40b767fee33eb1449ed74b69cf4b91b01b19e711 not found: ID does not exist"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.020057 4820 scope.go:117] "RemoveContainer" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"
Feb 21 09:00:29 crc kubenswrapper[4820]: E0221
09:00:29.021273 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554\": container with ID starting with 53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554 not found: ID does not exist" containerID="53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.021315 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554"} err="failed to get container status \"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554\": rpc error: code = NotFound desc = could not find container \"53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554\": container with ID starting with 53f2f52cc69dcd5a5d95d65cb5a58d14dddd36448be764d0b2c74f1ceaedc554 not found: ID does not exist"
Feb 21 09:00:29 crc kubenswrapper[4820]: I0221 09:00:29.709539 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" path="/var/lib/kubelet/pods/a632316b-bf37-4d7e-8a47-1a4d453390bf/volumes"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.537715 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"]
Feb 21 09:00:30 crc kubenswrapper[4820]: E0221 09:00:30.538141 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538155 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server"
Feb 21 09:00:30 crc kubenswrapper[4820]: E0221 09:00:30.538178 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf"
containerName="extract-utilities"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538185 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="extract-utilities"
Feb 21 09:00:30 crc kubenswrapper[4820]: E0221 09:00:30.538199 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="extract-content"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538205 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="extract-content"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.538431 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a632316b-bf37-4d7e-8a47-1a4d453390bf" containerName="registry-server"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.539925 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.550127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"]
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.688310 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.688415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") "
pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.688458 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.790556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.790758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.791708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.791717 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") "
pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.792029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.813924 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"redhat-marketplace-9f6vx\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:30 crc kubenswrapper[4820]: I0221 09:00:30.858174 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx"
Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.335832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"]
Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.956418 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" exitCode=0
Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.956469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9"}
Feb 21 09:00:31 crc kubenswrapper[4820]: I0221 09:00:31.956520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx"
event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerStarted","Data":"d9f16c78e908c2b3ec7f01837bc4438493ae43583de8dbcab16b77756c1e4737"}
Feb 21 09:00:32 crc kubenswrapper[4820]: I0221 09:00:32.966599 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerStarted","Data":"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25"}
Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.311423 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tt9xg"
Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.358946 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tt9xg"
Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.978449 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" exitCode=0
Feb 21 09:00:33 crc kubenswrapper[4820]: I0221 09:00:33.978522 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25"}
Feb 21 09:00:34 crc kubenswrapper[4820]: I0221 09:00:34.988649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerStarted","Data":"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72"}
Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.011195 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9f6vx" podStartSLOduration=2.534222785
podStartE2EDuration="5.011176889s" podCreationTimestamp="2026-02-21 09:00:30 +0000 UTC" firstStartedPulling="2026-02-21 09:00:31.958304562 +0000 UTC m=+8006.991388760" lastFinishedPulling="2026-02-21 09:00:34.435258656 +0000 UTC m=+8009.468342864" observedRunningTime="2026-02-21 09:00:35.006043699 +0000 UTC m=+8010.039127887" watchObservedRunningTime="2026-02-21 09:00:35.011176889 +0000 UTC m=+8010.044261117"
Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.108691 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5km2z"
Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.930957 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"]
Feb 21 09:00:35 crc kubenswrapper[4820]: I0221 09:00:35.931497 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tt9xg" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" containerID="cri-o://73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" gracePeriod=2
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.376333 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg"
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.502615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") pod \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") "
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.502791 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") pod \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") "
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.502827 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") pod \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\" (UID: \"89494cdc-fddf-40b6-b3c2-31fd3d48810c\") "
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.503337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities" (OuterVolumeSpecName: "utilities") pod "89494cdc-fddf-40b6-b3c2-31fd3d48810c" (UID: "89494cdc-fddf-40b6-b3c2-31fd3d48810c"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.503855 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.508558 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8" (OuterVolumeSpecName: "kube-api-access-vljj8") pod "89494cdc-fddf-40b6-b3c2-31fd3d48810c" (UID: "89494cdc-fddf-40b6-b3c2-31fd3d48810c"). InnerVolumeSpecName "kube-api-access-vljj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.560816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89494cdc-fddf-40b6-b3c2-31fd3d48810c" (UID: "89494cdc-fddf-40b6-b3c2-31fd3d48810c"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.606091 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89494cdc-fddf-40b6-b3c2-31fd3d48810c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:00:36 crc kubenswrapper[4820]: I0221 09:00:36.606130 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vljj8\" (UniqueName: \"kubernetes.io/projected/89494cdc-fddf-40b6-b3c2-31fd3d48810c-kube-api-access-vljj8\") on node \"crc\" DevicePath \"\""
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.007975 4820 generic.go:334] "Generic (PLEG): container finished" podID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810" exitCode=0
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"}
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008048 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-tt9xg"
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008070 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tt9xg" event={"ID":"89494cdc-fddf-40b6-b3c2-31fd3d48810c","Type":"ContainerDied","Data":"0e08f3f9df523ab78f42d321c8067aaecb631aa2e3579ca30e7e3b80406e5137"}
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.008091 4820 scope.go:117] "RemoveContainer" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.026756 4820 scope.go:117] "RemoveContainer" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.041209 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"]
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.048869 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tt9xg"]
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.067186 4820 scope.go:117] "RemoveContainer" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97"
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.093512 4820 scope.go:117] "RemoveContainer" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"
Feb 21 09:00:37 crc kubenswrapper[4820]: E0221 09:00:37.094007 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810\": container with ID starting with 73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810 not found: ID does not exist" containerID="73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"
Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094085 4820
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810"} err="failed to get container status \"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810\": rpc error: code = NotFound desc = could not find container \"73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810\": container with ID starting with 73e7f48ae52b775c0bcd7715ac04a74d90fd35e8a8a19a9613712556f6595810 not found: ID does not exist" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094118 4820 scope.go:117] "RemoveContainer" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" Feb 21 09:00:37 crc kubenswrapper[4820]: E0221 09:00:37.094459 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd\": container with ID starting with 1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd not found: ID does not exist" containerID="1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094503 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd"} err="failed to get container status \"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd\": rpc error: code = NotFound desc = could not find container \"1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd\": container with ID starting with 1da821af22233a18478483ee43a3e22aec210d04e5a6cd048960443a2888aebd not found: ID does not exist" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094521 4820 scope.go:117] "RemoveContainer" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" Feb 21 09:00:37 crc kubenswrapper[4820]: E0221 
09:00:37.094771 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97\": container with ID starting with 629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97 not found: ID does not exist" containerID="629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.094793 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97"} err="failed to get container status \"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97\": rpc error: code = NotFound desc = could not find container \"629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97\": container with ID starting with 629c70301815835493f326dbd628b6b5c8e0c59a909a762bc9f43dca32e75e97 not found: ID does not exist" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.140623 4820 scope.go:117] "RemoveContainer" containerID="b04b97fcb09f93be41f1283cfb58d7e98542300a672bfd210a8873ecd384f3d2" Feb 21 09:00:37 crc kubenswrapper[4820]: I0221 09:00:37.706197 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" path="/var/lib/kubelet/pods/89494cdc-fddf-40b6-b3c2-31fd3d48810c/volumes" Feb 21 09:00:38 crc kubenswrapper[4820]: I0221 09:00:38.329948 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:38 crc kubenswrapper[4820]: I0221 09:00:38.330202 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5km2z" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" containerID="cri-o://ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e" gracePeriod=2 Feb 21 09:00:39 
crc kubenswrapper[4820]: I0221 09:00:39.028352 4820 generic.go:334] "Generic (PLEG): container finished" podID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerID="ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e" exitCode=0 Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.028397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e"} Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.335592 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.463660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") pod \"61f3a2fe-753d-4282-a97e-bd85b3116def\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.463843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") pod \"61f3a2fe-753d-4282-a97e-bd85b3116def\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.463972 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") pod \"61f3a2fe-753d-4282-a97e-bd85b3116def\" (UID: \"61f3a2fe-753d-4282-a97e-bd85b3116def\") " Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.464550 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities" (OuterVolumeSpecName: "utilities") pod "61f3a2fe-753d-4282-a97e-bd85b3116def" (UID: "61f3a2fe-753d-4282-a97e-bd85b3116def"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.475131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9" (OuterVolumeSpecName: "kube-api-access-jz6n9") pod "61f3a2fe-753d-4282-a97e-bd85b3116def" (UID: "61f3a2fe-753d-4282-a97e-bd85b3116def"). InnerVolumeSpecName "kube-api-access-jz6n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.511867 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61f3a2fe-753d-4282-a97e-bd85b3116def" (UID: "61f3a2fe-753d-4282-a97e-bd85b3116def"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.565806 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.565837 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz6n9\" (UniqueName: \"kubernetes.io/projected/61f3a2fe-753d-4282-a97e-bd85b3116def-kube-api-access-jz6n9\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:39 crc kubenswrapper[4820]: I0221 09:00:39.565858 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f3a2fe-753d-4282-a97e-bd85b3116def-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.038488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5km2z" event={"ID":"61f3a2fe-753d-4282-a97e-bd85b3116def","Type":"ContainerDied","Data":"4bc72efadaeb7e04a89e9f8ca46762f345b7c0fdc83dde64218845eff06d2867"} Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.038819 4820 scope.go:117] "RemoveContainer" containerID="ca476df6658c0fbcfd9d0b7befc9653f90adbbe97d7c122482f8191801241a3e" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.038599 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5km2z" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.066898 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.067312 4820 scope.go:117] "RemoveContainer" containerID="11153aa6e4379a1c93fb4c95a36fc0e8b0f3604fa2272fced65db179d0c212c6" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.078598 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5km2z"] Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.093317 4820 scope.go:117] "RemoveContainer" containerID="72c9e4900e6ee50688f885f43a50c51001a88071dddf3cb2ab38cb306d156501" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.859139 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.859538 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:40 crc kubenswrapper[4820]: I0221 09:00:40.901973 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:41 crc kubenswrapper[4820]: I0221 09:00:41.095621 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:41 crc kubenswrapper[4820]: I0221 09:00:41.712994 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" path="/var/lib/kubelet/pods/61f3a2fe-753d-4282-a97e-bd85b3116def/volumes" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.331475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:44 crc 
kubenswrapper[4820]: I0221 09:00:44.331956 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9f6vx" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" containerID="cri-o://c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" gracePeriod=2 Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.775606 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.871218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") pod \"e3d89d26-8978-4853-88af-a72edeee1b7a\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.871694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") pod \"e3d89d26-8978-4853-88af-a72edeee1b7a\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.871947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") pod \"e3d89d26-8978-4853-88af-a72edeee1b7a\" (UID: \"e3d89d26-8978-4853-88af-a72edeee1b7a\") " Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.872830 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities" (OuterVolumeSpecName: "utilities") pod "e3d89d26-8978-4853-88af-a72edeee1b7a" (UID: "e3d89d26-8978-4853-88af-a72edeee1b7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.879342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp" (OuterVolumeSpecName: "kube-api-access-2f9vp") pod "e3d89d26-8978-4853-88af-a72edeee1b7a" (UID: "e3d89d26-8978-4853-88af-a72edeee1b7a"). InnerVolumeSpecName "kube-api-access-2f9vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.898774 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3d89d26-8978-4853-88af-a72edeee1b7a" (UID: "e3d89d26-8978-4853-88af-a72edeee1b7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.974591 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.974625 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f9vp\" (UniqueName: \"kubernetes.io/projected/e3d89d26-8978-4853-88af-a72edeee1b7a-kube-api-access-2f9vp\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:44 crc kubenswrapper[4820]: I0221 09:00:44.974638 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3d89d26-8978-4853-88af-a72edeee1b7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084126 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3d89d26-8978-4853-88af-a72edeee1b7a" 
containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" exitCode=0 Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084185 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f6vx" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72"} Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084253 4820 scope.go:117] "RemoveContainer" containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.084426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f6vx" event={"ID":"e3d89d26-8978-4853-88af-a72edeee1b7a","Type":"ContainerDied","Data":"d9f16c78e908c2b3ec7f01837bc4438493ae43583de8dbcab16b77756c1e4737"} Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.105829 4820 scope.go:117] "RemoveContainer" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.120145 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.129086 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f6vx"] Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.137537 4820 scope.go:117] "RemoveContainer" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.173828 4820 scope.go:117] "RemoveContainer" containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" Feb 21 
09:00:45 crc kubenswrapper[4820]: E0221 09:00:45.174296 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72\": container with ID starting with c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72 not found: ID does not exist" containerID="c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174334 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72"} err="failed to get container status \"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72\": rpc error: code = NotFound desc = could not find container \"c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72\": container with ID starting with c03324d967c0c471eaf8b9c18a3f6dc5c95809f489ea516de3b7d03797d32f72 not found: ID does not exist" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174359 4820 scope.go:117] "RemoveContainer" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" Feb 21 09:00:45 crc kubenswrapper[4820]: E0221 09:00:45.174759 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25\": container with ID starting with 53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25 not found: ID does not exist" containerID="53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174789 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25"} err="failed to get container status 
\"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25\": rpc error: code = NotFound desc = could not find container \"53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25\": container with ID starting with 53e43338c273f7c653430215e6e9efff0a4d2d94c38347c5f1f067bb4fb1fd25 not found: ID does not exist" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.174807 4820 scope.go:117] "RemoveContainer" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" Feb 21 09:00:45 crc kubenswrapper[4820]: E0221 09:00:45.175077 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9\": container with ID starting with 61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9 not found: ID does not exist" containerID="61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.175100 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9"} err="failed to get container status \"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9\": rpc error: code = NotFound desc = could not find container \"61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9\": container with ID starting with 61fb303786bbcba55c36302210a1612e600333bbbd86e81fec30ec266a87d6e9 not found: ID does not exist" Feb 21 09:00:45 crc kubenswrapper[4820]: I0221 09:00:45.707427 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" path="/var/lib/kubelet/pods/e3d89d26-8978-4853-88af-a72edeee1b7a/volumes" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.156670 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29527741-49n79"] Feb 21 09:01:00 crc 
kubenswrapper[4820]: E0221 09:01:00.157510 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157525 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157535 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157541 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157552 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157558 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-content" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157572 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157578 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157591 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157596 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="extract-utilities" Feb 21 09:01:00 crc 
kubenswrapper[4820]: E0221 09:01:00.157608 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157614 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157627 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157632 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="extract-utilities" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157643 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157648 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: E0221 09:01:00.157668 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157674 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157853 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f3a2fe-753d-4282-a97e-bd85b3116def" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.157871 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d89d26-8978-4853-88af-a72edeee1b7a" containerName="registry-server" Feb 21 09:01:00 crc 
kubenswrapper[4820]: I0221 09:01:00.157900 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="89494cdc-fddf-40b6-b3c2-31fd3d48810c" containerName="registry-server" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.158622 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.175331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29527741-49n79"] Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299058 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299509 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.299826 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401490 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401634 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.401797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.409713 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.412943 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.413222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.425524 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"keystone-cron-29527741-49n79\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.479958 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:00 crc kubenswrapper[4820]: W0221 09:01:00.910108 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c3e367e_0369_46eb_8886_a7d40b0a6626.slice/crio-7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500 WatchSource:0}: Error finding container 7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500: Status 404 returned error can't find the container with id 7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500 Feb 21 09:01:00 crc kubenswrapper[4820]: I0221 09:01:00.913516 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29527741-49n79"] Feb 21 09:01:01 crc kubenswrapper[4820]: I0221 09:01:01.232110 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerStarted","Data":"8314297c5b1616002e5ed95b6471d2b5cf2824be89387b21d10eebe341031605"} Feb 21 09:01:01 crc kubenswrapper[4820]: I0221 09:01:01.233953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerStarted","Data":"7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500"} Feb 21 09:01:01 crc kubenswrapper[4820]: I0221 09:01:01.253604 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29527741-49n79" podStartSLOduration=1.253579555 podStartE2EDuration="1.253579555s" podCreationTimestamp="2026-02-21 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:01:01.246941636 +0000 UTC m=+8036.280025834" watchObservedRunningTime="2026-02-21 09:01:01.253579555 +0000 UTC m=+8036.286663753" Feb 21 09:01:04 crc 
kubenswrapper[4820]: I0221 09:01:04.260045 4820 generic.go:334] "Generic (PLEG): container finished" podID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerID="8314297c5b1616002e5ed95b6471d2b5cf2824be89387b21d10eebe341031605" exitCode=0 Feb 21 09:01:04 crc kubenswrapper[4820]: I0221 09:01:04.260168 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerDied","Data":"8314297c5b1616002e5ed95b6471d2b5cf2824be89387b21d10eebe341031605"} Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.620186 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.707735 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.707835 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.707961 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.708035 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pf82\" (UniqueName: 
\"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") pod \"7c3e367e-0369-46eb-8886-a7d40b0a6626\" (UID: \"7c3e367e-0369-46eb-8886-a7d40b0a6626\") " Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.712396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82" (OuterVolumeSpecName: "kube-api-access-6pf82") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "kube-api-access-6pf82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.714182 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.736179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.760808 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data" (OuterVolumeSpecName: "config-data") pod "7c3e367e-0369-46eb-8886-a7d40b0a6626" (UID: "7c3e367e-0369-46eb-8886-a7d40b0a6626"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810881 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810911 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810921 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c3e367e-0369-46eb-8886-a7d40b0a6626-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:05 crc kubenswrapper[4820]: I0221 09:01:05.810933 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pf82\" (UniqueName: \"kubernetes.io/projected/7c3e367e-0369-46eb-8886-a7d40b0a6626-kube-api-access-6pf82\") on node \"crc\" DevicePath \"\"" Feb 21 09:01:06 crc kubenswrapper[4820]: I0221 09:01:06.280281 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29527741-49n79" event={"ID":"7c3e367e-0369-46eb-8886-a7d40b0a6626","Type":"ContainerDied","Data":"7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500"} Feb 21 09:01:06 crc kubenswrapper[4820]: I0221 09:01:06.280332 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2796cdaa07751ea17ecec7c22feac2ef3f7053e3d36cb4f111be7accabd500" Feb 21 09:01:06 crc kubenswrapper[4820]: I0221 09:01:06.280427 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29527741-49n79" Feb 21 09:02:13 crc kubenswrapper[4820]: I0221 09:02:13.816446 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:02:13 crc kubenswrapper[4820]: I0221 09:02:13.816952 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:02:43 crc kubenswrapper[4820]: I0221 09:02:43.816004 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:02:43 crc kubenswrapper[4820]: I0221 09:02:43.816635 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.816200 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.816635 4820 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.816677 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.817394 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:03:13 crc kubenswrapper[4820]: I0221 09:03:13.817441 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" gracePeriod=600 Feb 21 09:03:13 crc kubenswrapper[4820]: E0221 09:03:13.942375 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275086 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" exitCode=0 Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"} Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275180 4820 scope.go:117] "RemoveContainer" containerID="fab82e286429ea9948c3cdafd64bd6f5f6d2085b288f12fbb11541642982e3be" Feb 21 09:03:14 crc kubenswrapper[4820]: I0221 09:03:14.275879 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:14 crc kubenswrapper[4820]: E0221 09:03:14.276134 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:03:25 crc kubenswrapper[4820]: I0221 09:03:25.711076 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:03:25 crc kubenswrapper[4820]: E0221 09:03:25.712153 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 
09:03:31 crc kubenswrapper[4820]: I0221 09:03:31.463654 4820 generic.go:334] "Generic (PLEG): container finished" podID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerID="32ed0675cc1bb9abf6513200d402c172eeaa97600b28f6d1ad6567f4b0f54be1" exitCode=0 Feb 21 09:03:31 crc kubenswrapper[4820]: I0221 09:03:31.463772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerDied","Data":"32ed0675cc1bb9abf6513200d402c172eeaa97600b28f6d1ad6567f4b0f54be1"} Feb 21 09:03:32 crc kubenswrapper[4820]: I0221 09:03:32.928053 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099786 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099868 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.099954 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.100032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.100103 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") pod \"dab763aa-fd5e-41b2-96d8-f758ad76f779\" (UID: \"dab763aa-fd5e-41b2-96d8-f758ad76f779\") " Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.107475 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm" (OuterVolumeSpecName: "kube-api-access-tt2tm") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "kube-api-access-tt2tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.109090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.136660 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.138322 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.145732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory" (OuterVolumeSpecName: "inventory") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.152203 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.158515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "dab763aa-fd5e-41b2-96d8-f758ad76f779" (UID: "dab763aa-fd5e-41b2-96d8-f758ad76f779"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202565 4820 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202804 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202879 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.202982 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tt2tm\" (UniqueName: \"kubernetes.io/projected/dab763aa-fd5e-41b2-96d8-f758ad76f779-kube-api-access-tt2tm\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.203053 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.203118 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.203289 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dab763aa-fd5e-41b2-96d8-f758ad76f779-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.493057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" event={"ID":"dab763aa-fd5e-41b2-96d8-f758ad76f779","Type":"ContainerDied","Data":"36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f"} Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.493341 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36c774a1836920a1c6a3632cabbb4951058702b6304881a9b065f60c5557314f" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.493149 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-wpbzs" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659002 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"] Feb 21 09:03:33 crc kubenswrapper[4820]: E0221 09:03:33.659508 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerName="keystone-cron" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659534 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerName="keystone-cron" Feb 21 09:03:33 crc kubenswrapper[4820]: E0221 09:03:33.659549 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerName="telemetry-openstack-openstack-cell1" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659557 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerName="telemetry-openstack-openstack-cell1" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659800 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3e367e-0369-46eb-8886-a7d40b0a6626" containerName="keystone-cron" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.659833 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab763aa-fd5e-41b2-96d8-f758ad76f779" containerName="telemetry-openstack-openstack-cell1" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.660791 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.663986 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.664271 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.665758 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.666647 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:03:33 crc kubenswrapper[4820]: I0221 09:03:33.667014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.019939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020003 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020029 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020073 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.020102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.030173 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"] Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.121806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.125851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod 
\"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.126192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.126680 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.127217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.137212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"neutron-sriov-openstack-openstack-cell1-z2jbg\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.305320 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"
Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.812100 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 09:03:34 crc kubenswrapper[4820]: I0221 09:03:34.816441 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"]
Feb 21 09:03:35 crc kubenswrapper[4820]: I0221 09:03:35.516140 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerStarted","Data":"b65081ab1b23aa2859be07ab92b79121f6d0bff1798435348ffd99b0bf7bb83a"}
Feb 21 09:03:36 crc kubenswrapper[4820]: I0221 09:03:36.529726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerStarted","Data":"94581acefe6fd0d5e61b7cc4451c08195f92f5c85da82b73824c4c107abb0aa3"}
Feb 21 09:03:36 crc kubenswrapper[4820]: I0221 09:03:36.554520 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" podStartSLOduration=3.102626996 podStartE2EDuration="3.554501465s" podCreationTimestamp="2026-02-21 09:03:33 +0000 UTC" firstStartedPulling="2026-02-21 09:03:34.811878634 +0000 UTC m=+8189.844962832" lastFinishedPulling="2026-02-21 09:03:35.263753103 +0000 UTC m=+8190.296837301" observedRunningTime="2026-02-21 09:03:36.545017616 +0000 UTC m=+8191.578101814" watchObservedRunningTime="2026-02-21 09:03:36.554501465 +0000 UTC m=+8191.587585663"
Feb 21 09:03:37 crc kubenswrapper[4820]: I0221 09:03:37.697270 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:03:37 crc kubenswrapper[4820]: E0221 09:03:37.697615 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:03:48 crc kubenswrapper[4820]: I0221 09:03:48.697688 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:03:48 crc kubenswrapper[4820]: E0221 09:03:48.698478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:03:59 crc kubenswrapper[4820]: I0221 09:03:59.697203 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:03:59 crc kubenswrapper[4820]: E0221 09:03:59.698072 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:04:13 crc kubenswrapper[4820]: I0221 09:04:13.697770 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:04:13 crc kubenswrapper[4820]: E0221 09:04:13.698987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:04:28 crc kubenswrapper[4820]: I0221 09:04:28.696309 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:04:28 crc kubenswrapper[4820]: E0221 09:04:28.697168 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:04:33 crc kubenswrapper[4820]: I0221 09:04:33.098860 4820 generic.go:334] "Generic (PLEG): container finished" podID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerID="94581acefe6fd0d5e61b7cc4451c08195f92f5c85da82b73824c4c107abb0aa3" exitCode=0
Feb 21 09:04:33 crc kubenswrapper[4820]: I0221 09:04:33.098938 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerDied","Data":"94581acefe6fd0d5e61b7cc4451c08195f92f5c85da82b73824c4c107abb0aa3"}
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.756130 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.856905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") "
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857020 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") "
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857108 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") "
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857144 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") "
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.857952 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") pod \"f751ca69-8835-4c27-b4ab-9dac973aacd6\" (UID: \"f751ca69-8835-4c27-b4ab-9dac973aacd6\") "
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.864457 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.866156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt" (OuterVolumeSpecName: "kube-api-access-w94tt") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "kube-api-access-w94tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.891386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory" (OuterVolumeSpecName: "inventory") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.892319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.895627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "f751ca69-8835-4c27-b4ab-9dac973aacd6" (UID: "f751ca69-8835-4c27-b4ab-9dac973aacd6"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960067 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960110 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960120 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960132 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f751ca69-8835-4c27-b4ab-9dac973aacd6-inventory\") on node \"crc\" DevicePath \"\""
Feb 21 09:04:34 crc kubenswrapper[4820]: I0221 09:04:34.960142 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w94tt\" (UniqueName: \"kubernetes.io/projected/f751ca69-8835-4c27-b4ab-9dac973aacd6-kube-api-access-w94tt\") on node \"crc\" DevicePath \"\""
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.138799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg" event={"ID":"f751ca69-8835-4c27-b4ab-9dac973aacd6","Type":"ContainerDied","Data":"b65081ab1b23aa2859be07ab92b79121f6d0bff1798435348ffd99b0bf7bb83a"}
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.138852 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65081ab1b23aa2859be07ab92b79121f6d0bff1798435348ffd99b0bf7bb83a"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.138873 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-z2jbg"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.208735 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"]
Feb 21 09:04:35 crc kubenswrapper[4820]: E0221 09:04:35.209281 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerName="neutron-sriov-openstack-openstack-cell1"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.209305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerName="neutron-sriov-openstack-openstack-cell1"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.211120 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f751ca69-8835-4c27-b4ab-9dac973aacd6" containerName="neutron-sriov-openstack-openstack-cell1"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.218829 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.223886 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224119 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224310 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224489 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.224603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.239954 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"]
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367220 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367622 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.367924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.368012 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469397 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.469640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.473788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.473858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.475403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.477025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.487213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"neutron-dhcp-openstack-openstack-cell1-brfqb\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:35 crc kubenswrapper[4820]: I0221 09:04:35.541267 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:04:36 crc kubenswrapper[4820]: I0221 09:04:36.091005 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"]
Feb 21 09:04:36 crc kubenswrapper[4820]: I0221 09:04:36.149200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerStarted","Data":"cb2fe4c4633138cc3b323da622f03723105033ded32247d9712201b94c948799"}
Feb 21 09:04:37 crc kubenswrapper[4820]: I0221 09:04:37.159315 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerStarted","Data":"c288b7a025e7d3b15f4932bf29ced80e553c63aa3651970cafa9392f6c8aaba1"}
Feb 21 09:04:37 crc kubenswrapper[4820]: I0221 09:04:37.178515 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" podStartSLOduration=1.723226254 podStartE2EDuration="2.178496995s" podCreationTimestamp="2026-02-21 09:04:35 +0000 UTC" firstStartedPulling="2026-02-21 09:04:36.103431379 +0000 UTC m=+8251.136515587" lastFinishedPulling="2026-02-21 09:04:36.55870213 +0000 UTC m=+8251.591786328" observedRunningTime="2026-02-21 09:04:37.173676984 +0000 UTC m=+8252.206761182" watchObservedRunningTime="2026-02-21 09:04:37.178496995 +0000 UTC m=+8252.211581193"
Feb 21 09:04:43 crc kubenswrapper[4820]: I0221 09:04:43.697354 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:04:43 crc kubenswrapper[4820]: E0221 09:04:43.698127 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:04:57 crc kubenswrapper[4820]: I0221 09:04:57.697770 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:04:57 crc kubenswrapper[4820]: E0221 09:04:57.698764 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:05:11 crc kubenswrapper[4820]: I0221 09:05:11.697306 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:05:11 crc kubenswrapper[4820]: E0221 09:05:11.698056 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:05:26 crc kubenswrapper[4820]: I0221 09:05:26.699669 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:05:26 crc kubenswrapper[4820]: E0221 09:05:26.700844 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:05:41 crc kubenswrapper[4820]: I0221 09:05:41.697315 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:05:41 crc kubenswrapper[4820]: E0221 09:05:41.698409 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:05:44 crc kubenswrapper[4820]: I0221 09:05:44.909162 4820 generic.go:334] "Generic (PLEG): container finished" podID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerID="c288b7a025e7d3b15f4932bf29ced80e553c63aa3651970cafa9392f6c8aaba1" exitCode=0
Feb 21 09:05:44 crc kubenswrapper[4820]: I0221 09:05:44.909205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerDied","Data":"c288b7a025e7d3b15f4932bf29ced80e553c63aa3651970cafa9392f6c8aaba1"}
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.310740 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.436536 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") "
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.436888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") "
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.437050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") "
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.437359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") "
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.437459 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") pod \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\" (UID: \"0e84eaf9-2cd2-457c-b532-d632db99ba6e\") "
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.444492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.444790 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8" (OuterVolumeSpecName: "kube-api-access-trdg8") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "kube-api-access-trdg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.471687 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory" (OuterVolumeSpecName: "inventory") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.476046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.488359 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "0e84eaf9-2cd2-457c-b532-d632db99ba6e" (UID: "0e84eaf9-2cd2-457c-b532-d632db99ba6e"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.540812 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541022 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541105 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541178 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trdg8\" (UniqueName: \"kubernetes.io/projected/0e84eaf9-2cd2-457c-b532-d632db99ba6e-kube-api-access-trdg8\") on node \"crc\" DevicePath \"\""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.541280 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e84eaf9-2cd2-457c-b532-d632db99ba6e-inventory\") on node \"crc\" DevicePath \"\""
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.930943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb" event={"ID":"0e84eaf9-2cd2-457c-b532-d632db99ba6e","Type":"ContainerDied","Data":"cb2fe4c4633138cc3b323da622f03723105033ded32247d9712201b94c948799"}
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.931002 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb2fe4c4633138cc3b323da622f03723105033ded32247d9712201b94c948799"
Feb 21 09:05:46 crc kubenswrapper[4820]: I0221 09:05:46.931031 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-brfqb"
Feb 21 09:05:53 crc kubenswrapper[4820]: I0221 09:05:53.696796 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:05:53 crc kubenswrapper[4820]: E0221 09:05:53.697495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:06:04 crc kubenswrapper[4820]: I0221 09:06:04.696995 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018"
Feb 21 09:06:04 crc kubenswrapper[4820]: E0221 09:06:04.697812 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.202169 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.202949 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" gracePeriod=30
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.701580 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.701818 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" containerID="cri-o://583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" gracePeriod=30
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.856435 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.856655 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" containerID="cri-o://5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" gracePeriod=30
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.930098 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.930616 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" containerID="cri-o://acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" gracePeriod=30
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.930731 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" containerID="cri-o://9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" gracePeriod=30
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.971466 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.972094 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log" containerID="cri-o://f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261" gracePeriod=30
Feb 21 09:06:12 crc kubenswrapper[4820]: I0221 09:06:12.972176 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata" containerID="cri-o://3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd" gracePeriod=30
Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.189271 4820 generic.go:334] "Generic (PLEG): container finished" podID="febb41c5-cb59-4868-b57d-63b20b422240" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054" exitCode=143
Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.189335 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerDied","Data":"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"}
Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.191892 4820
generic.go:334] "Generic (PLEG): container finished" podID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerID="f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261" exitCode=143 Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.191930 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerDied","Data":"f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261"} Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.434548 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.436090 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.437414 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 21 09:06:13 crc kubenswrapper[4820]: E0221 09:06:13.437482 4820 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:13 crc kubenswrapper[4820]: I0221 09:06:13.964074 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.014717 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") pod \"bef3408d-c90c-48d8-85fa-366e68d6e66d\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.014812 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") pod \"bef3408d-c90c-48d8-85fa-366e68d6e66d\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.014939 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") pod \"bef3408d-c90c-48d8-85fa-366e68d6e66d\" (UID: \"bef3408d-c90c-48d8-85fa-366e68d6e66d\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.015668 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.032523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92" (OuterVolumeSpecName: "kube-api-access-gbj92") pod "bef3408d-c90c-48d8-85fa-366e68d6e66d" (UID: "bef3408d-c90c-48d8-85fa-366e68d6e66d"). InnerVolumeSpecName "kube-api-access-gbj92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.052515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bef3408d-c90c-48d8-85fa-366e68d6e66d" (UID: "bef3408d-c90c-48d8-85fa-366e68d6e66d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.069517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data" (OuterVolumeSpecName: "config-data") pod "bef3408d-c90c-48d8-85fa-366e68d6e66d" (UID: "bef3408d-c90c-48d8-85fa-366e68d6e66d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.116775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") pod \"ff2505a3-9888-436f-9e92-045fb71aac57\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.116829 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") pod \"ff2505a3-9888-436f-9e92-045fb71aac57\" (UID: \"ff2505a3-9888-436f-9e92-045fb71aac57\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117010 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") pod \"ff2505a3-9888-436f-9e92-045fb71aac57\" (UID: 
\"ff2505a3-9888-436f-9e92-045fb71aac57\") " Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117423 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117440 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef3408d-c90c-48d8-85fa-366e68d6e66d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.117451 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbj92\" (UniqueName: \"kubernetes.io/projected/bef3408d-c90c-48d8-85fa-366e68d6e66d-kube-api-access-gbj92\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.120336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm" (OuterVolumeSpecName: "kube-api-access-5j8bm") pod "ff2505a3-9888-436f-9e92-045fb71aac57" (UID: "ff2505a3-9888-436f-9e92-045fb71aac57"). InnerVolumeSpecName "kube-api-access-5j8bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.143201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data" (OuterVolumeSpecName: "config-data") pod "ff2505a3-9888-436f-9e92-045fb71aac57" (UID: "ff2505a3-9888-436f-9e92-045fb71aac57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.144778 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff2505a3-9888-436f-9e92-045fb71aac57" (UID: "ff2505a3-9888-436f-9e92-045fb71aac57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205740 4820 generic.go:334] "Generic (PLEG): container finished" podID="ff2505a3-9888-436f-9e92-045fb71aac57" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" exitCode=0 Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205788 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerDied","Data":"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205824 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff2505a3-9888-436f-9e92-045fb71aac57","Type":"ContainerDied","Data":"e739d22e8a5fb67dd8a38933da1b7cdbf628d65d406c279afc479f8a5e13a79c"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205830 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.205841 4820 scope.go:117] "RemoveContainer" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211749 4820 generic.go:334] "Generic (PLEG): container finished" podID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" exitCode=0 Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211780 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerDied","Data":"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bef3408d-c90c-48d8-85fa-366e68d6e66d","Type":"ContainerDied","Data":"2a0a262a8895e6cca872ab1c86adcc262df33c931a7351df26a0b7545670d96f"} Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.211839 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.219113 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.219147 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8bm\" (UniqueName: \"kubernetes.io/projected/ff2505a3-9888-436f-9e92-045fb71aac57-kube-api-access-5j8bm\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.219160 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2505a3-9888-436f-9e92-045fb71aac57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.237185 4820 scope.go:117] "RemoveContainer" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.241508 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e\": container with ID starting with b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e not found: ID does not exist" containerID="b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.241564 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e"} err="failed to get container status \"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e\": rpc error: code = NotFound desc = could not find container \"b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e\": container 
with ID starting with b596d72880695f00e03a63ea8f7961ced841daa9b3ff4ecd3419e0aec954126e not found: ID does not exist" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.241590 4820 scope.go:117] "RemoveContainer" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.265320 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.283891 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.293748 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.300599 4820 scope.go:117] "RemoveContainer" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.301907 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.303114 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47\": container with ID starting with 583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47 not found: ID does not exist" containerID="583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.303144 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47"} err="failed to get container status \"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47\": rpc error: code = NotFound desc = could not find container 
\"583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47\": container with ID starting with 583c115acbee834b6fcb43dfd8d96db7b16389501ebdb840a3e3361b59fece47 not found: ID does not exist" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311326 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.311700 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311716 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.311732 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311739 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: E0221 09:06:14.311772 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311780 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311970 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" containerName="nova-cell0-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311983 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0e84eaf9-2cd2-457c-b532-d632db99ba6e" containerName="neutron-dhcp-openstack-openstack-cell1" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.311993 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" containerName="nova-cell1-conductor-conductor" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.312731 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.315455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.325876 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.327740 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.334035 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.345862 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.360864 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424090 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424149 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdhwm\" (UniqueName: \"kubernetes.io/projected/1747f740-f880-4c19-817b-c9341c1179e7-kube-api-access-kdhwm\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.424587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9f8p\" (UniqueName: \"kubernetes.io/projected/d4de2ed9-8828-4c5e-af1e-24c752565d74-kube-api-access-k9f8p\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526189 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526255 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhwm\" (UniqueName: \"kubernetes.io/projected/1747f740-f880-4c19-817b-c9341c1179e7-kube-api-access-kdhwm\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526418 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.526465 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9f8p\" (UniqueName: 
\"kubernetes.io/projected/d4de2ed9-8828-4c5e-af1e-24c752565d74-kube-api-access-k9f8p\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.531202 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.531599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.542187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1747f740-f880-4c19-817b-c9341c1179e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.542436 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de2ed9-8828-4c5e-af1e-24c752565d74-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.546685 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9f8p\" (UniqueName: \"kubernetes.io/projected/d4de2ed9-8828-4c5e-af1e-24c752565d74-kube-api-access-k9f8p\") pod \"nova-cell0-conductor-0\" (UID: 
\"d4de2ed9-8828-4c5e-af1e-24c752565d74\") " pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.551155 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdhwm\" (UniqueName: \"kubernetes.io/projected/1747f740-f880-4c19-817b-c9341c1179e7-kube-api-access-kdhwm\") pod \"nova-cell1-conductor-0\" (UID: \"1747f740-f880-4c19-817b-c9341c1179e7\") " pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.644295 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:14 crc kubenswrapper[4820]: I0221 09:06:14.655845 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.146501 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.167257 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.297785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1747f740-f880-4c19-817b-c9341c1179e7","Type":"ContainerStarted","Data":"e401112f74a2cfe1e3c2eab499282feb86545a3543e17a7461af78e50f166e4d"} Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.298637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4de2ed9-8828-4c5e-af1e-24c752565d74","Type":"ContainerStarted","Data":"eb2350dbca4068fd3d7b5187ffdbbad53f077cf0583131b4e6e47c513ff87b58"} Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.709612 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef3408d-c90c-48d8-85fa-366e68d6e66d" 
path="/var/lib/kubelet/pods/bef3408d-c90c-48d8-85fa-366e68d6e66d/volumes" Feb 21 09:06:15 crc kubenswrapper[4820]: I0221 09:06:15.710751 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2505a3-9888-436f-9e92-045fb71aac57" path="/var/lib/kubelet/pods/ff2505a3-9888-436f-9e92-045fb71aac57/volumes" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.327986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d4de2ed9-8828-4c5e-af1e-24c752565d74","Type":"ContainerStarted","Data":"c916061f222d05c6361a31199f1e68c39c4f1b523bcfd2639057574a3efd1eff"} Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.328445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.331873 4820 generic.go:334] "Generic (PLEG): container finished" podID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerID="3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd" exitCode=0 Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.331928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerDied","Data":"3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd"} Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.337929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1747f740-f880-4c19-817b-c9341c1179e7","Type":"ContainerStarted","Data":"a342203a671583fac859bf8833c1ba92fb31d1c27dfa3491077ab1360325af8b"} Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.338910 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.354017 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.353997486 podStartE2EDuration="2.353997486s" podCreationTimestamp="2026-02-21 09:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:16.352138675 +0000 UTC m=+8351.385222863" watchObservedRunningTime="2026-02-21 09:06:16.353997486 +0000 UTC m=+8351.387081674"
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.355687 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": read tcp 10.217.0.2:45938->10.217.1.99:8774: read: connection reset by peer"
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.355723 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.99:8774/\": read tcp 10.217.0.2:45934->10.217.1.99:8774: read: connection reset by peer"
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.581647 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.628859 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.62883347 podStartE2EDuration="2.62883347s" podCreationTimestamp="2026-02-21 09:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:16.373167167 +0000 UTC m=+8351.406251375" watchObservedRunningTime="2026-02-21 09:06:16.62883347 +0000 UTC m=+8351.661917668"
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.678947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679054 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679072 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.679300 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") pod \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\" (UID: \"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.680027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs" (OuterVolumeSpecName: "logs") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.689905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w" (OuterVolumeSpecName: "kube-api-access-57j5w") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "kube-api-access-57j5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.752358 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.774731 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.780893 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data" (OuterVolumeSpecName: "config-data") pod "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" (UID: "4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783603 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-logs\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783625 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57j5w\" (UniqueName: \"kubernetes.io/projected/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-kube-api-access-57j5w\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783636 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783645 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.783654 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.816808 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.986897 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987180 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987397 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.987553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") pod \"febb41c5-cb59-4868-b57d-63b20b422240\" (UID: \"febb41c5-cb59-4868-b57d-63b20b422240\") "
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.990471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs" (OuterVolumeSpecName: "logs") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:06:16 crc kubenswrapper[4820]: I0221 09:06:16.995497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v" (OuterVolumeSpecName: "kube-api-access-5kl5v") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "kube-api-access-5kl5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.023005 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data" (OuterVolumeSpecName: "config-data") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.036574 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.040903 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.061015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "febb41c5-cb59-4868-b57d-63b20b422240" (UID: "febb41c5-cb59-4868-b57d-63b20b422240"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090721 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kl5v\" (UniqueName: \"kubernetes.io/projected/febb41c5-cb59-4868-b57d-63b20b422240-kube-api-access-5kl5v\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090780 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090795 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febb41c5-cb59-4868-b57d-63b20b422240-logs\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090809 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090822 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.090836 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/febb41c5-cb59-4868-b57d-63b20b422240-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348435 4820 generic.go:334] "Generic (PLEG): container finished" podID="febb41c5-cb59-4868-b57d-63b20b422240" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301" exitCode=0
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348517 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348603 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerDied","Data":"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"}
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348660 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febb41c5-cb59-4868-b57d-63b20b422240","Type":"ContainerDied","Data":"f84f25836fa8a5c0573e20405d3a79bd27bbd629ad136467d54a559c6258e788"}
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.348683 4820 scope.go:117] "RemoveContainer" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.350952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf","Type":"ContainerDied","Data":"693d0232eb9d3d5e0ecbe5f8fe7211549dd820f567608a73163321471ff6aae0"}
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.351115 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.381266 4820 scope.go:117] "RemoveContainer" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.401180 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.409693 4820 scope.go:117] "RemoveContainer" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"
Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.410632 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301\": container with ID starting with 9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301 not found: ID does not exist" containerID="9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.410676 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301"} err="failed to get container status \"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301\": rpc error: code = NotFound desc = could not find container \"9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301\": container with ID starting with 9d005827d2fc444e4f4c9b411c47618ba67878e33afb0b120d2a797e42a3f301 not found: ID does not exist"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.410708 4820 scope.go:117] "RemoveContainer" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"
Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.411530 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054\": container with ID starting with acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054 not found: ID does not exist" containerID="acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.411583 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054"} err="failed to get container status \"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054\": rpc error: code = NotFound desc = could not find container \"acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054\": container with ID starting with acd8d37222b1af5b297e80731dae1cc951567b0fd1c6969c4d4dd9d51cc54054 not found: ID does not exist"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.411609 4820 scope.go:117] "RemoveContainer" containerID="3bc8a51d89a75337ed95a4da428a2c5cd89eada5282bff5c15d37e08160dc6cd"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.429034 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.440550 4820 scope.go:117] "RemoveContainer" containerID="f54650c953f71352ebf3663fefc2c46a1224cdbd7d75aace44661c3d5cae2261"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.441063 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.460966 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474025 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474526 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474543 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log"
Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474566 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474573 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log"
Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474596 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474602 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata"
Feb 21 09:06:17 crc kubenswrapper[4820]: E0221 09:06:17.474614 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474621 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474785 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-log"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474801 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" containerName="nova-metadata-metadata"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474819 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-api"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.474833 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="febb41c5-cb59-4868-b57d-63b20b422240" containerName="nova-api-log"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.475851 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.485702 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.486175 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.492091 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.505312 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.507138 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.509338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.509466 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.509548 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.517209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-config-data\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-config-data\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c9db30-edab-4679-a671-15ae25d6448b-logs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602921 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w6mq\" (UniqueName: \"kubernetes.io/projected/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-kube-api-access-2w6mq\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-logs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.602995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.603042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-public-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.603067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrfn\" (UniqueName: \"kubernetes.io/projected/77c9db30-edab-4679-a671-15ae25d6448b-kube-api-access-qvrfn\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.603363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-config-data\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.705975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-config-data\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c9db30-edab-4679-a671-15ae25d6448b-logs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706183 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w6mq\" (UniqueName: \"kubernetes.io/projected/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-kube-api-access-2w6mq\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-logs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706259 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-public-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.706341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvrfn\" (UniqueName: \"kubernetes.io/projected/77c9db30-edab-4679-a671-15ae25d6448b-kube-api-access-qvrfn\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.707219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-logs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.707843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c9db30-edab-4679-a671-15ae25d6448b-logs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.710798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.712873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-config-data\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.714013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.715330 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf" path="/var/lib/kubelet/pods/4a5efcf2-dfdc-4c49-85f1-ccbd24edaebf/volumes"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.716591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-public-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.717113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c9db30-edab-4679-a671-15ae25d6448b-config-data\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.717198 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.717654 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febb41c5-cb59-4868-b57d-63b20b422240" path="/var/lib/kubelet/pods/febb41c5-cb59-4868-b57d-63b20b422240/volumes"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.720974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.721739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvrfn\" (UniqueName: \"kubernetes.io/projected/77c9db30-edab-4679-a671-15ae25d6448b-kube-api-access-qvrfn\") pod \"nova-metadata-0\" (UID: \"77c9db30-edab-4679-a671-15ae25d6448b\") " pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.724482 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w6mq\" (UniqueName: \"kubernetes.io/projected/eae0a5ff-41ba-4522-a7f0-e69ff23ee566-kube-api-access-2w6mq\") pod \"nova-api-0\" (UID: \"eae0a5ff-41ba-4522-a7f0-e69ff23ee566\") " pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.880892 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.892062 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 21 09:06:17 crc kubenswrapper[4820]: I0221 09:06:17.930933 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.019859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") pod \"475239fa-3785-4704-bef1-f554cf694456\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") "
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.020185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") pod \"475239fa-3785-4704-bef1-f554cf694456\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") "
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.020297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") pod \"475239fa-3785-4704-bef1-f554cf694456\" (UID: \"475239fa-3785-4704-bef1-f554cf694456\") "
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.033009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7" (OuterVolumeSpecName: "kube-api-access-n4mt7") pod "475239fa-3785-4704-bef1-f554cf694456" (UID: "475239fa-3785-4704-bef1-f554cf694456"). InnerVolumeSpecName "kube-api-access-n4mt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.059870 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "475239fa-3785-4704-bef1-f554cf694456" (UID: "475239fa-3785-4704-bef1-f554cf694456"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.093343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data" (OuterVolumeSpecName: "config-data") pod "475239fa-3785-4704-bef1-f554cf694456" (UID: "475239fa-3785-4704-bef1-f554cf694456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.127634 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.127667 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475239fa-3785-4704-bef1-f554cf694456-config-data\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.127677 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4mt7\" (UniqueName: \"kubernetes.io/projected/475239fa-3785-4704-bef1-f554cf694456-kube-api-access-n4mt7\") on node \"crc\" DevicePath \"\""
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388349 4820 generic.go:334] "Generic (PLEG): container finished" podID="475239fa-3785-4704-bef1-f554cf694456" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" exitCode=0
Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388422 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerDied","Data":"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286"} Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388527 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"475239fa-3785-4704-bef1-f554cf694456","Type":"ContainerDied","Data":"b9728440a68a14dc6808fd23c52f77370ca72000bc7bcb7fce2546c782ccca62"} Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.388547 4820 scope.go:117] "RemoveContainer" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.424464 4820 scope.go:117] "RemoveContainer" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" Feb 21 09:06:18 crc kubenswrapper[4820]: E0221 09:06:18.430800 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286\": container with ID starting with 5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286 not found: ID does not exist" containerID="5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.431106 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286"} err="failed to get container status \"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286\": rpc error: code = NotFound desc = could not find container \"5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286\": container with ID starting with 
5a6032c17ab45c33940f5af7177aa3d6b297cec4cb7d4ce125faba043afcd286 not found: ID does not exist" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.461775 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.487014 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.518289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.531972 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: E0221 09:06:18.532466 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.532483 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.532677 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="475239fa-3785-4704-bef1-f554cf694456" containerName="nova-scheduler-scheduler" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.533519 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.555446 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.575835 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.640568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.640677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.640719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs9r5\" (UniqueName: \"kubernetes.io/projected/4d1667b0-00cb-4768-97cb-de0ee527f829-kube-api-access-zs9r5\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.691147 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.699038 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:18 crc kubenswrapper[4820]: E0221 09:06:18.699266 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.742528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.742626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.742666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs9r5\" (UniqueName: \"kubernetes.io/projected/4d1667b0-00cb-4768-97cb-de0ee527f829-kube-api-access-zs9r5\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.760367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.774873 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d1667b0-00cb-4768-97cb-de0ee527f829-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.785113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs9r5\" (UniqueName: \"kubernetes.io/projected/4d1667b0-00cb-4768-97cb-de0ee527f829-kube-api-access-zs9r5\") pod \"nova-scheduler-0\" (UID: \"4d1667b0-00cb-4768-97cb-de0ee527f829\") " pod="openstack/nova-scheduler-0" Feb 21 09:06:18 crc kubenswrapper[4820]: I0221 09:06:18.885665 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.359647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 21 09:06:19 crc kubenswrapper[4820]: W0221 09:06:19.362697 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d1667b0_00cb_4768_97cb_de0ee527f829.slice/crio-86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0 WatchSource:0}: Error finding container 86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0: Status 404 returned error can't find the container with id 86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0 Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.404313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77c9db30-edab-4679-a671-15ae25d6448b","Type":"ContainerStarted","Data":"90db96dd0b8cc6de40f50614565a48c9546bd2da2255453fcb933fa7bbaf4de2"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.404367 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"77c9db30-edab-4679-a671-15ae25d6448b","Type":"ContainerStarted","Data":"890630f2409d6d8a71452e5106fa511add930de6350c38167e1a89bd0d53903d"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.404381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77c9db30-edab-4679-a671-15ae25d6448b","Type":"ContainerStarted","Data":"fbaaca88dbffda7c47585c253c606ae5fdd51f9eae104cdb70b1cbf1a100091e"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.419128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1667b0-00cb-4768-97cb-de0ee527f829","Type":"ContainerStarted","Data":"86c2dd15af493902bd804940199fc8d78e1a4be5b320ee1c4cbd8494cfe974c0"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.430154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eae0a5ff-41ba-4522-a7f0-e69ff23ee566","Type":"ContainerStarted","Data":"0afa42237db1caf0aa5af9a485d9edeb1e300defb8d0dd5c9606d344cdfa2116"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.430197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eae0a5ff-41ba-4522-a7f0-e69ff23ee566","Type":"ContainerStarted","Data":"b862c7f4100cdf40e3f49957cf7d67764dbefbfaf926027a7198f4d58f541147"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.430210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eae0a5ff-41ba-4522-a7f0-e69ff23ee566","Type":"ContainerStarted","Data":"7d4cbefd15d015c5d34edbde0d8787d31158158f9304859821966d9653463e90"} Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.433215 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.433195814 podStartE2EDuration="2.433195814s" podCreationTimestamp="2026-02-21 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:19.428657911 +0000 UTC m=+8354.461742129" watchObservedRunningTime="2026-02-21 09:06:19.433195814 +0000 UTC m=+8354.466280012" Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.471155 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.471132205 podStartE2EDuration="2.471132205s" podCreationTimestamp="2026-02-21 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:19.452109658 +0000 UTC m=+8354.485193856" watchObservedRunningTime="2026-02-21 09:06:19.471132205 +0000 UTC m=+8354.504216393" Feb 21 09:06:19 crc kubenswrapper[4820]: I0221 09:06:19.709389 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475239fa-3785-4704-bef1-f554cf694456" path="/var/lib/kubelet/pods/475239fa-3785-4704-bef1-f554cf694456/volumes" Feb 21 09:06:20 crc kubenswrapper[4820]: I0221 09:06:20.446904 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1667b0-00cb-4768-97cb-de0ee527f829","Type":"ContainerStarted","Data":"02b24a320803b1dc026e2d46c72eb57f0dbd9ec5eab126940cd196ecb0f9db78"} Feb 21 09:06:20 crc kubenswrapper[4820]: I0221 09:06:20.476474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.476453085 podStartE2EDuration="2.476453085s" podCreationTimestamp="2026-02-21 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:06:20.467136401 +0000 UTC m=+8355.500220599" watchObservedRunningTime="2026-02-21 09:06:20.476453085 +0000 UTC m=+8355.509537283" Feb 21 09:06:22 crc kubenswrapper[4820]: I0221 09:06:22.881462 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 09:06:22 crc kubenswrapper[4820]: I0221 09:06:22.881837 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 21 09:06:23 crc kubenswrapper[4820]: I0221 09:06:23.886399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 21 09:06:24 crc kubenswrapper[4820]: I0221 09:06:24.677440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 21 09:06:24 crc kubenswrapper[4820]: I0221 09:06:24.693081 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.881820 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.882174 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.892467 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 09:06:27 crc kubenswrapper[4820]: I0221 09:06:27.892512 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.886339 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.895480 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="77c9db30-edab-4679-a671-15ae25d6448b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc 
kubenswrapper[4820]: I0221 09:06:28.895510 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="77c9db30-edab-4679-a671-15ae25d6448b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.907449 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eae0a5ff-41ba-4522-a7f0-e69ff23ee566" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.907464 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eae0a5ff-41ba-4522-a7f0-e69ff23ee566" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 21 09:06:28 crc kubenswrapper[4820]: I0221 09:06:28.926833 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 21 09:06:29 crc kubenswrapper[4820]: I0221 09:06:29.567215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 21 09:06:33 crc kubenswrapper[4820]: I0221 09:06:33.697132 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:33 crc kubenswrapper[4820]: E0221 09:06:33.697445 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.886605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.888470 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.891232 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.898108 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.898498 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.900031 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 21 09:06:37 crc kubenswrapper[4820]: I0221 09:06:37.905445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 09:06:38 crc kubenswrapper[4820]: I0221 09:06:38.623477 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 21 09:06:38 crc kubenswrapper[4820]: I0221 09:06:38.628615 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 21 09:06:38 crc kubenswrapper[4820]: I0221 09:06:38.633085 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.968916 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"] Feb 21 09:06:39 crc 
kubenswrapper[4820]: I0221 09:06:39.971054 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.973770 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s6fgp" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975261 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975293 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975317 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.975377 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 21 09:06:39 crc kubenswrapper[4820]: I0221 09:06:39.977927 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"] Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: 
\"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096328 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096421 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096464 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: 
\"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096489 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096614 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: 
\"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.096846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198890 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.198942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 
09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199134 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199172 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.199295 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.200559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.205479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.205505 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.205509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.206470 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.207153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.207814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.207829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.208434 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.209185 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.223101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.290551 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:06:40 crc kubenswrapper[4820]: W0221 09:06:40.853493 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2666b573_2e76_4374_9fd9_39ac7aabddef.slice/crio-0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f WatchSource:0}: Error finding container 0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f: Status 404 returned error can't find the container with id 0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f Feb 21 09:06:40 crc kubenswrapper[4820]: I0221 09:06:40.853511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"] Feb 21 09:06:41 crc kubenswrapper[4820]: I0221 09:06:41.654418 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerStarted","Data":"eacd316e3479702ece893fba6b1dd2ceabf934c321e66004e672da9c73a4e841"} Feb 21 09:06:41 crc kubenswrapper[4820]: I0221 09:06:41.654663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerStarted","Data":"0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f"} Feb 21 09:06:41 crc kubenswrapper[4820]: I0221 09:06:41.680387 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" podStartSLOduration=2.197717235 podStartE2EDuration="2.68036394s" podCreationTimestamp="2026-02-21 09:06:39 +0000 UTC" firstStartedPulling="2026-02-21 09:06:40.857861073 +0000 UTC m=+8375.890945271" lastFinishedPulling="2026-02-21 
09:06:41.340507778 +0000 UTC m=+8376.373591976" observedRunningTime="2026-02-21 09:06:41.677884553 +0000 UTC m=+8376.710968751" watchObservedRunningTime="2026-02-21 09:06:41.68036394 +0000 UTC m=+8376.713448138" Feb 21 09:06:45 crc kubenswrapper[4820]: I0221 09:06:45.713993 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:06:45 crc kubenswrapper[4820]: E0221 09:06:45.716987 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:00 crc kubenswrapper[4820]: I0221 09:07:00.697130 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:00 crc kubenswrapper[4820]: E0221 09:07:00.698216 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:11 crc kubenswrapper[4820]: I0221 09:07:11.697015 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:11 crc kubenswrapper[4820]: E0221 09:07:11.698215 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:24 crc kubenswrapper[4820]: I0221 09:07:24.697102 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:24 crc kubenswrapper[4820]: E0221 09:07:24.698495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:36 crc kubenswrapper[4820]: I0221 09:07:36.696720 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:36 crc kubenswrapper[4820]: E0221 09:07:36.697714 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:07:50 crc kubenswrapper[4820]: I0221 09:07:50.696515 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:07:50 crc kubenswrapper[4820]: E0221 09:07:50.697646 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:08:05 crc kubenswrapper[4820]: I0221 09:08:05.707056 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:08:05 crc kubenswrapper[4820]: E0221 09:08:05.708034 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:08:17 crc kubenswrapper[4820]: I0221 09:08:17.697576 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:08:18 crc kubenswrapper[4820]: I0221 09:08:18.674738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b"} Feb 21 09:10:04 crc kubenswrapper[4820]: I0221 09:10:04.748038 4820 generic.go:334] "Generic (PLEG): container finished" podID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerID="eacd316e3479702ece893fba6b1dd2ceabf934c321e66004e672da9c73a4e841" exitCode=0 Feb 21 09:10:04 crc kubenswrapper[4820]: I0221 09:10:04.748133 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" 
event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerDied","Data":"eacd316e3479702ece893fba6b1dd2ceabf934c321e66004e672da9c73a4e841"} Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.190883 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379129 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379470 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdj4\" (UniqueName: \"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379832 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379899 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.379956 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.380003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.380104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") pod \"2666b573-2e76-4374-9fd9-39ac7aabddef\" (UID: \"2666b573-2e76-4374-9fd9-39ac7aabddef\") " Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.386202 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4" (OuterVolumeSpecName: "kube-api-access-7zdj4") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "kube-api-access-7zdj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.397085 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.409756 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.411280 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.412653 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.413959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.423740 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.424200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory" (OuterVolumeSpecName: "inventory") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.424321 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.431895 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.434931 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2666b573-2e76-4374-9fd9-39ac7aabddef" (UID: "2666b573-2e76-4374-9fd9-39ac7aabddef"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483759 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483803 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483813 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483824 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483837 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483847 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483859 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdj4\" (UniqueName: 
\"kubernetes.io/projected/2666b573-2e76-4374-9fd9-39ac7aabddef-kube-api-access-7zdj4\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483871 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-inventory\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483880 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483892 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.483903 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2666b573-2e76-4374-9fd9-39ac7aabddef-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.766795 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5" event={"ID":"2666b573-2e76-4374-9fd9-39ac7aabddef","Type":"ContainerDied","Data":"0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f"} Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.766831 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb78463b2f50530730496f209ecd44b1ce385dad5b54014a0f9ce0eeb68352f" Feb 21 09:10:06 crc kubenswrapper[4820]: I0221 09:10:06.766845 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5"
Feb 21 09:10:25 crc kubenswrapper[4820]: E0221 09:10:25.594230 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:49456->38.102.83.201:43255: write tcp 38.102.83.201:49456->38.102.83.201:43255: write: broken pipe
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.070655 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xnscq"]
Feb 21 09:10:35 crc kubenswrapper[4820]: E0221 09:10:35.071621 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.071638 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.071849 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2666b573-2e76-4374-9fd9-39ac7aabddef" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.073206 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.085623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"]
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.219504 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.219808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.220002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.322195 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.322309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.322461 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.323217 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.323206 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.348285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"certified-operators-xnscq\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") " pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.395975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:35 crc kubenswrapper[4820]: I0221 09:10:35.963621 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"]
Feb 21 09:10:36 crc kubenswrapper[4820]: I0221 09:10:36.058458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerStarted","Data":"3b55eef8026f41e4a67d5df7b619e17617d87f8da719b1494c48d75452b2a6af"}
Feb 21 09:10:37 crc kubenswrapper[4820]: I0221 09:10:37.069954 4820 generic.go:334] "Generic (PLEG): container finished" podID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7" exitCode=0
Feb 21 09:10:37 crc kubenswrapper[4820]: I0221 09:10:37.070046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"}
Feb 21 09:10:37 crc kubenswrapper[4820]: I0221 09:10:37.073340 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 09:10:38 crc kubenswrapper[4820]: I0221 09:10:38.082817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerStarted","Data":"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"}
Feb 21 09:10:39 crc kubenswrapper[4820]: I0221 09:10:39.097681 4820 generic.go:334] "Generic (PLEG): container finished" podID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9" exitCode=0
Feb 21 09:10:39 crc kubenswrapper[4820]: I0221 09:10:39.097909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"}
Feb 21 09:10:40 crc kubenswrapper[4820]: I0221 09:10:40.113499 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerStarted","Data":"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"}
Feb 21 09:10:40 crc kubenswrapper[4820]: I0221 09:10:40.142022 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xnscq" podStartSLOduration=2.739347097 podStartE2EDuration="5.142004186s" podCreationTimestamp="2026-02-21 09:10:35 +0000 UTC" firstStartedPulling="2026-02-21 09:10:37.072984325 +0000 UTC m=+8612.106068523" lastFinishedPulling="2026-02-21 09:10:39.475641414 +0000 UTC m=+8614.508725612" observedRunningTime="2026-02-21 09:10:40.138494901 +0000 UTC m=+8615.171579129" watchObservedRunningTime="2026-02-21 09:10:40.142004186 +0000 UTC m=+8615.175088384"
Feb 21 09:10:43 crc kubenswrapper[4820]: I0221 09:10:43.816551 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:10:43 crc kubenswrapper[4820]: I0221 09:10:43.816946 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 09:10:45 crc kubenswrapper[4820]: I0221 09:10:45.396878 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:45 crc kubenswrapper[4820]: I0221 09:10:45.398626 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:45 crc kubenswrapper[4820]: I0221 09:10:45.452813 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:46 crc kubenswrapper[4820]: I0221 09:10:46.232949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:46 crc kubenswrapper[4820]: I0221 09:10:46.301546 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"]
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.200104 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xnscq" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server" containerID="cri-o://6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" gracePeriod=2
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.708890 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.864854 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") pod \"7f67a876-9f79-4e98-98d9-c8f80940528f\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") "
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.864903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") pod \"7f67a876-9f79-4e98-98d9-c8f80940528f\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") "
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.864977 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") pod \"7f67a876-9f79-4e98-98d9-c8f80940528f\" (UID: \"7f67a876-9f79-4e98-98d9-c8f80940528f\") "
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.867853 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities" (OuterVolumeSpecName: "utilities") pod "7f67a876-9f79-4e98-98d9-c8f80940528f" (UID: "7f67a876-9f79-4e98-98d9-c8f80940528f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.873626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l" (OuterVolumeSpecName: "kube-api-access-ssn9l") pod "7f67a876-9f79-4e98-98d9-c8f80940528f" (UID: "7f67a876-9f79-4e98-98d9-c8f80940528f"). InnerVolumeSpecName "kube-api-access-ssn9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.923878 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f67a876-9f79-4e98-98d9-c8f80940528f" (UID: "7f67a876-9f79-4e98-98d9-c8f80940528f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.969033 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.969099 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssn9l\" (UniqueName: \"kubernetes.io/projected/7f67a876-9f79-4e98-98d9-c8f80940528f-kube-api-access-ssn9l\") on node \"crc\" DevicePath \"\""
Feb 21 09:10:48 crc kubenswrapper[4820]: I0221 09:10:48.969116 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f67a876-9f79-4e98-98d9-c8f80940528f-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214541 4820 generic.go:334] "Generic (PLEG): container finished" podID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f" exitCode=0
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214673 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"}
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214899 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnscq" event={"ID":"7f67a876-9f79-4e98-98d9-c8f80940528f","Type":"ContainerDied","Data":"3b55eef8026f41e4a67d5df7b619e17617d87f8da719b1494c48d75452b2a6af"}
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214928 4820 scope.go:117] "RemoveContainer" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.214721 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnscq"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.239709 4820 scope.go:117] "RemoveContainer" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.269344 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"]
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.307819 4820 scope.go:117] "RemoveContainer" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.336156 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xnscq"]
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.340563 4820 scope.go:117] "RemoveContainer" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"
Feb 21 09:10:49 crc kubenswrapper[4820]: E0221 09:10:49.341327 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f\": container with ID starting with 6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f not found: ID does not exist" containerID="6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.341418 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f"} err="failed to get container status \"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f\": rpc error: code = NotFound desc = could not find container \"6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f\": container with ID starting with 6ba31b526580583beb91c22c12d72ba89592a430de9ab8e321d458af25cb600f not found: ID does not exist"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.341467 4820 scope.go:117] "RemoveContainer" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"
Feb 21 09:10:49 crc kubenswrapper[4820]: E0221 09:10:49.341922 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9\": container with ID starting with 2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9 not found: ID does not exist" containerID="2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.341976 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9"} err="failed to get container status \"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9\": rpc error: code = NotFound desc = could not find container \"2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9\": container with ID starting with 2974cba8d2a1706b6bda86cf496d6decaf99da9febf2a4f80e8f97b884fa83c9 not found: ID does not exist"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.342008 4820 scope.go:117] "RemoveContainer" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"
Feb 21 09:10:49 crc kubenswrapper[4820]: E0221 09:10:49.342415 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7\": container with ID starting with 5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7 not found: ID does not exist" containerID="5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.342453 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7"} err="failed to get container status \"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7\": rpc error: code = NotFound desc = could not find container \"5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7\": container with ID starting with 5ee63def4dafa353be7526c2ce24d5a2158b20d3d9db3e5634f785e07c5754b7 not found: ID does not exist"
Feb 21 09:10:49 crc kubenswrapper[4820]: I0221 09:10:49.717576 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" path="/var/lib/kubelet/pods/7f67a876-9f79-4e98-98d9-c8f80940528f/volumes"
Feb 21 09:11:13 crc kubenswrapper[4820]: I0221 09:11:13.815934 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:11:13 crc kubenswrapper[4820]: I0221 09:11:13.816787 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.807159 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"]
Feb 21 09:11:25 crc kubenswrapper[4820]: E0221 09:11:25.808678 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.808703 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server"
Feb 21 09:11:25 crc kubenswrapper[4820]: E0221 09:11:25.808727 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-utilities"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.808739 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-utilities"
Feb 21 09:11:25 crc kubenswrapper[4820]: E0221 09:11:25.808775 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-content"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.808788 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="extract-content"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.809107 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f67a876-9f79-4e98-98d9-c8f80940528f" containerName="registry-server"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.818144 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.847888 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"]
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.969481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.969827 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:25 crc kubenswrapper[4820]: I0221 09:11:25.969920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.071838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.071911 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.072005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.072584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.072609 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.098874 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"redhat-marketplace-c67xc\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") " pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.158689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:26 crc kubenswrapper[4820]: I0221 09:11:26.661251 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"]
Feb 21 09:11:27 crc kubenswrapper[4820]: I0221 09:11:27.619018 4820 generic.go:334] "Generic (PLEG): container finished" podID="18749304-4042-46e9-8641-963815f5659c" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef" exitCode=0
Feb 21 09:11:27 crc kubenswrapper[4820]: I0221 09:11:27.619103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"}
Feb 21 09:11:27 crc kubenswrapper[4820]: I0221 09:11:27.619445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerStarted","Data":"0f24824b62c5dcb7203393a1912dfcae426de62ec3168651c566197ab53cbe13"}
Feb 21 09:11:28 crc kubenswrapper[4820]: I0221 09:11:28.630009 4820 generic.go:334] "Generic (PLEG): container finished" podID="18749304-4042-46e9-8641-963815f5659c" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7" exitCode=0
Feb 21 09:11:28 crc kubenswrapper[4820]: I0221 09:11:28.630057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"}
Feb 21 09:11:29 crc kubenswrapper[4820]: I0221 09:11:29.643368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerStarted","Data":"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"}
Feb 21 09:11:29 crc kubenswrapper[4820]: I0221 09:11:29.686373 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c67xc" podStartSLOduration=3.299600461 podStartE2EDuration="4.686344373s" podCreationTimestamp="2026-02-21 09:11:25 +0000 UTC" firstStartedPulling="2026-02-21 09:11:27.621617144 +0000 UTC m=+8662.654701342" lastFinishedPulling="2026-02-21 09:11:29.008361046 +0000 UTC m=+8664.041445254" observedRunningTime="2026-02-21 09:11:29.672026204 +0000 UTC m=+8664.705110412" watchObservedRunningTime="2026-02-21 09:11:29.686344373 +0000 UTC m=+8664.719428591"
Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.159414 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.160316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.205008 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.762736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:36 crc kubenswrapper[4820]: I0221 09:11:36.816554 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"]
Feb 21 09:11:38 crc kubenswrapper[4820]: I0221 09:11:38.730145 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c67xc" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server" containerID="cri-o://19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" gracePeriod=2
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.299079 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.365021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") pod \"18749304-4042-46e9-8641-963815f5659c\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") "
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.365145 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") pod \"18749304-4042-46e9-8641-963815f5659c\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") "
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.365174 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") pod \"18749304-4042-46e9-8641-963815f5659c\" (UID: \"18749304-4042-46e9-8641-963815f5659c\") "
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.366308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities" (OuterVolumeSpecName: "utilities") pod "18749304-4042-46e9-8641-963815f5659c" (UID: "18749304-4042-46e9-8641-963815f5659c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.366912 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.372597 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs" (OuterVolumeSpecName: "kube-api-access-2hwjs") pod "18749304-4042-46e9-8641-963815f5659c" (UID: "18749304-4042-46e9-8641-963815f5659c"). InnerVolumeSpecName "kube-api-access-2hwjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.395804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18749304-4042-46e9-8641-963815f5659c" (UID: "18749304-4042-46e9-8641-963815f5659c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.469622 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18749304-4042-46e9-8641-963815f5659c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.469667 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwjs\" (UniqueName: \"kubernetes.io/projected/18749304-4042-46e9-8641-963815f5659c-kube-api-access-2hwjs\") on node \"crc\" DevicePath \"\""
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740368 4820 generic.go:334] "Generic (PLEG): container finished" podID="18749304-4042-46e9-8641-963815f5659c" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f" exitCode=0
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"}
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c67xc" event={"ID":"18749304-4042-46e9-8641-963815f5659c","Type":"ContainerDied","Data":"0f24824b62c5dcb7203393a1912dfcae426de62ec3168651c566197ab53cbe13"}
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.740463 4820 scope.go:117] "RemoveContainer" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.741041 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c67xc"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.778253 4820 scope.go:117] "RemoveContainer" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.787342 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"]
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.798528 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c67xc"]
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.802848 4820 scope.go:117] "RemoveContainer" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.853357 4820 scope.go:117] "RemoveContainer" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"
Feb 21 09:11:39 crc kubenswrapper[4820]: E0221 09:11:39.858718 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f\": container with ID starting with 19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f not found: ID does not exist" containerID="19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.858781 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f"} err="failed to get container status \"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f\": rpc error: code = NotFound desc = could not find container \"19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f\": container with ID starting with 19c948340d1f3e60b681557f1ddd75d62705fbc9bcfefc1ad6ecfb1e0133281f not found: ID does not exist"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.858814 4820 scope.go:117] "RemoveContainer" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"
Feb 21 09:11:39 crc kubenswrapper[4820]: E0221 09:11:39.859386 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7\": container with ID starting with 77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7 not found: ID does not exist" containerID="77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.859461 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7"} err="failed to get container status \"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7\": rpc error: code = NotFound desc = could not find container \"77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7\": container with ID starting with 77ffb0e7d04fe602bde4663ba8d5b3403a358ebe5d5be063b934c3480bab92c7 not found: ID does not exist"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.859489 4820 scope.go:117] "RemoveContainer" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"
Feb 21 09:11:39 crc kubenswrapper[4820]: E0221 09:11:39.865732 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef\": container with ID starting with 3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef not found: ID does not exist" containerID="3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"
Feb 21 09:11:39 crc kubenswrapper[4820]: I0221 09:11:39.865971 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef"} err="failed to get container status \"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef\": rpc error: code = NotFound desc = could not find container \"3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef\": container with ID starting with 3e28d02b24b16683e4a5244f423562e7971a517e6ca5aff5d0ba1c98560c75ef not found: ID does not exist"
Feb 21 09:11:41 crc kubenswrapper[4820]: I0221 09:11:41.711738 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18749304-4042-46e9-8641-963815f5659c" path="/var/lib/kubelet/pods/18749304-4042-46e9-8641-963815f5659c/volumes"
Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.816403 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.817361 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.817437 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.818325 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b"}
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:11:43 crc kubenswrapper[4820]: I0221 09:11:43.818388 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b" gracePeriod=600 Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790446 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b" exitCode=0 Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b"} Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790814 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"} Feb 21 09:11:44 crc kubenswrapper[4820]: I0221 09:11:44.790835 4820 scope.go:117] "RemoveContainer" containerID="162c44d0258770b165af4bda23092a0cc20691dcb6147b818c017cc895336018" Feb 21 09:12:04 crc kubenswrapper[4820]: I0221 09:12:04.111791 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 21 09:12:04 crc kubenswrapper[4820]: I0221 09:12:04.113658 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" 
podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption" containerID="cri-o://9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49" gracePeriod=30 Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.294678 4820 generic.go:334] "Generic (PLEG): container finished" podID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerID="9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49" exitCode=137 Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.294772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerDied","Data":"9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49"} Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.658810 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.725377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") pod \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.725896 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") pod \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\" (UID: \"f51c53b3-c766-40db-ad65-5935f9fb3ee4\") " Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.732559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn" (OuterVolumeSpecName: "kube-api-access-gczmn") pod "f51c53b3-c766-40db-ad65-5935f9fb3ee4" (UID: 
"f51c53b3-c766-40db-ad65-5935f9fb3ee4"). InnerVolumeSpecName "kube-api-access-gczmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.747513 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0" (OuterVolumeSpecName: "mariadb-data") pod "f51c53b3-c766-40db-ad65-5935f9fb3ee4" (UID: "f51c53b3-c766-40db-ad65-5935f9fb3ee4"). InnerVolumeSpecName "pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.829804 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczmn\" (UniqueName: \"kubernetes.io/projected/f51c53b3-c766-40db-ad65-5935f9fb3ee4-kube-api-access-gczmn\") on node \"crc\" DevicePath \"\"" Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.829864 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") on node \"crc\" " Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.869924 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.870775 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0") on node "crc"
Feb 21 09:12:34 crc kubenswrapper[4820]: I0221 09:12:34.931905 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-58a776a1-72a7-4202-bac7-6be2c820ffb0\") on node \"crc\" DevicePath \"\""
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.307049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f51c53b3-c766-40db-ad65-5935f9fb3ee4","Type":"ContainerDied","Data":"707475d0c6275ed4702ec4fee55d65d5c005c4843fb7b9c91608c48f928cd4c6"}
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.307116 4820 scope.go:117] "RemoveContainer" containerID="9265ce156d963015ed9d0dc964122ef5cf17eb7532d8a20b6b597df27cc4af49"
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.307131 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.351321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.364420 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"]
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.711190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" path="/var/lib/kubelet/pods/f51c53b3-c766-40db-ad65-5935f9fb3ee4/volumes"
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.917136 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Feb 21 09:12:35 crc kubenswrapper[4820]: I0221 09:12:35.917620 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption" containerID="cri-o://7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" gracePeriod=30
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.479066 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.606248 4820 generic.go:334] "Generic (PLEG): container finished" podID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8" exitCode=137
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.606324 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.606322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerDied","Data":"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"}
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.607065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"0aeb2e3c-2741-4cfb-ae99-d7f696b69490","Type":"ContainerDied","Data":"8dd551c3890db1e73ddd2531407ed1073b385c0ce262dc89304db8e225ef25b4"}
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.607126 4820 scope.go:117] "RemoveContainer" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.612489 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") pod \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") "
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.612683 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") pod \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") "
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.613424 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") pod \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\" (UID: \"0aeb2e3c-2741-4cfb-ae99-d7f696b69490\") "
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.619807 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk" (OuterVolumeSpecName: "kube-api-access-7gxxk") pod "0aeb2e3c-2741-4cfb-ae99-d7f696b69490" (UID: "0aeb2e3c-2741-4cfb-ae99-d7f696b69490"). InnerVolumeSpecName "kube-api-access-7gxxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.622051 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "0aeb2e3c-2741-4cfb-ae99-d7f696b69490" (UID: "0aeb2e3c-2741-4cfb-ae99-d7f696b69490"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.640263 4820 scope.go:117] "RemoveContainer" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"
Feb 21 09:13:06 crc kubenswrapper[4820]: E0221 09:13:06.640931 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8\": container with ID starting with 7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8 not found: ID does not exist" containerID="7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.641003 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8"} err="failed to get container status \"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8\": rpc error: code = NotFound desc = could not find container \"7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8\": container with ID starting with 7747ac6c77885af80b64601f404093f14c0fb94ed38b3d9cb1ceeac04679b3e8 not found: ID does not exist"
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.657212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452" (OuterVolumeSpecName: "ovn-data") pod "0aeb2e3c-2741-4cfb-ae99-d7f696b69490" (UID: "0aeb2e3c-2741-4cfb-ae99-d7f696b69490"). InnerVolumeSpecName "pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.715919 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gxxk\" (UniqueName: \"kubernetes.io/projected/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-kube-api-access-7gxxk\") on node \"crc\" DevicePath \"\""
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.715962 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/0aeb2e3c-2741-4cfb-ae99-d7f696b69490-ovn-data-cert\") on node \"crc\" DevicePath \"\""
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.716013 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") on node \"crc\" "
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.741401 4820 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.741551 4820 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452") on node "crc"
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.818399 4820 reconciler_common.go:293] "Volume detached for volume \"pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24f3d218-b61c-4ecd-ba3b-ef41fb1f1452\") on node \"crc\" DevicePath \"\""
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.943603 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Feb 21 09:13:06 crc kubenswrapper[4820]: I0221 09:13:06.954292 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"]
Feb 21 09:13:07 crc kubenswrapper[4820]: I0221 09:13:07.718053 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" path="/var/lib/kubelet/pods/0aeb2e3c-2741-4cfb-ae99-d7f696b69490/volumes"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.935156 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936339 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936355 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server"
Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936382 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-content"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936388 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-content"
Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936397 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936403 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption"
Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936421 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936426 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption"
Feb 21 09:13:25 crc kubenswrapper[4820]: E0221 09:13:25.936442 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-utilities"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936447 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="extract-utilities"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936632 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aeb2e3c-2741-4cfb-ae99-d7f696b69490" containerName="adoption"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936646 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51c53b3-c766-40db-ad65-5935f9fb3ee4" containerName="adoption"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.936657 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="18749304-4042-46e9-8641-963815f5659c" containerName="registry-server"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.937458 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.941005 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.941216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.941572 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.942338 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Feb 21 09:13:25 crc kubenswrapper[4820]: I0221 09:13:25.948847 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053406 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.053864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156760 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156887 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.156991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157036 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157073 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157199 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157841 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.157900 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.158043 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.158895 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.158903 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.165921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.166053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.166334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.178336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.190898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.258135 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.742101 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 21 09:13:26 crc kubenswrapper[4820]: I0221 09:13:26.821397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerStarted","Data":"7e5ff066704f3e33a6c2fdd8d04c5c80690a10bd358cca7bf49443c234af864d"}
Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.270560 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb"
Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.271105 4820 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb"
Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.271286 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Livene
ssProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(417782d7-a42e-4872-9e2d-0f11848812cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.272463 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" Feb 21 09:14:12 crc kubenswrapper[4820]: E0221 09:14:12.468984 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8419493e1fd846703d277695e03fc5eb\\\"\"" pod="openstack/tempest-tests-tempest" 
podUID="417782d7-a42e-4872-9e2d-0f11848812cd" Feb 21 09:14:13 crc kubenswrapper[4820]: I0221 09:14:13.816830 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:14:13 crc kubenswrapper[4820]: I0221 09:14:13.816893 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:14:24 crc kubenswrapper[4820]: I0221 09:14:24.898001 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 21 09:14:26 crc kubenswrapper[4820]: I0221 09:14:26.589651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerStarted","Data":"ab6120679bb44c551ad880ba2cc6a7b2086118bf8f465825a332cb5176e9c344"} Feb 21 09:14:26 crc kubenswrapper[4820]: I0221 09:14:26.613608 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.470947165 podStartE2EDuration="1m2.613583661s" podCreationTimestamp="2026-02-21 09:13:24 +0000 UTC" firstStartedPulling="2026-02-21 09:13:26.752105331 +0000 UTC m=+8781.785189529" lastFinishedPulling="2026-02-21 09:14:24.894741827 +0000 UTC m=+8839.927826025" observedRunningTime="2026-02-21 09:14:26.606760555 +0000 UTC m=+8841.639844763" watchObservedRunningTime="2026-02-21 09:14:26.613583661 +0000 UTC m=+8841.646667859" Feb 21 09:14:43 crc kubenswrapper[4820]: I0221 09:14:43.817066 4820 patch_prober.go:28] 
interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:14:43 crc kubenswrapper[4820]: I0221 09:14:43.818298 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.153646 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm"] Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.156076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.158628 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.158667 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.162832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm"] Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.300556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"collect-profiles-29527755-jj5nm\" (UID: 
\"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.300702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.300848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.403419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.403527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.403579 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.407745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.417789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.420363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"collect-profiles-29527755-jj5nm\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:00 crc kubenswrapper[4820]: I0221 09:15:00.482949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.012255 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm"] Feb 21 09:15:01 crc kubenswrapper[4820]: W0221 09:15:01.017088 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc2aafa_e2b9_427e_83a1_e9da552ad85e.slice/crio-ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4 WatchSource:0}: Error finding container ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4: Status 404 returned error can't find the container with id ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4 Feb 21 09:15:01 crc kubenswrapper[4820]: E0221 09:15:01.943327 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc2aafa_e2b9_427e_83a1_e9da552ad85e.slice/crio-conmon-d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc2aafa_e2b9_427e_83a1_e9da552ad85e.slice/crio-d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e.scope\": RecentStats: unable to find data in memory cache]" Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.974900 4820 generic.go:334] "Generic (PLEG): container finished" podID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerID="d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e" exitCode=0 Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.974942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" 
event={"ID":"abc2aafa-e2b9-427e-83a1-e9da552ad85e","Type":"ContainerDied","Data":"d03a067a33606ca7b05b2c6b5da768e29afb4b085d65b4120c44864979a9f56e"} Feb 21 09:15:01 crc kubenswrapper[4820]: I0221 09:15:01.974968 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" event={"ID":"abc2aafa-e2b9-427e-83a1-e9da552ad85e","Type":"ContainerStarted","Data":"ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4"} Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.422451 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.569747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") pod \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.569979 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") pod \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.570102 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") pod \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\" (UID: \"abc2aafa-e2b9-427e-83a1-e9da552ad85e\") " Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.570562 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "abc2aafa-e2b9-427e-83a1-e9da552ad85e" (UID: "abc2aafa-e2b9-427e-83a1-e9da552ad85e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.571880 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abc2aafa-e2b9-427e-83a1-e9da552ad85e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.576677 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9" (OuterVolumeSpecName: "kube-api-access-hcmk9") pod "abc2aafa-e2b9-427e-83a1-e9da552ad85e" (UID: "abc2aafa-e2b9-427e-83a1-e9da552ad85e"). InnerVolumeSpecName "kube-api-access-hcmk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.577353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "abc2aafa-e2b9-427e-83a1-e9da552ad85e" (UID: "abc2aafa-e2b9-427e-83a1-e9da552ad85e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.674077 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcmk9\" (UniqueName: \"kubernetes.io/projected/abc2aafa-e2b9-427e-83a1-e9da552ad85e-kube-api-access-hcmk9\") on node \"crc\" DevicePath \"\"" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.674147 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/abc2aafa-e2b9-427e-83a1-e9da552ad85e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.991896 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" event={"ID":"abc2aafa-e2b9-427e-83a1-e9da552ad85e","Type":"ContainerDied","Data":"ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4"} Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.991936 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab9a893a83bb61314a71849ee0aba7881cdcaf98262e7d2383396f6f36d74cc4" Feb 21 09:15:03 crc kubenswrapper[4820]: I0221 09:15:03.991997 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527755-jj5nm" Feb 21 09:15:04 crc kubenswrapper[4820]: I0221 09:15:04.495882 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 09:15:04 crc kubenswrapper[4820]: I0221 09:15:04.507436 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527710-h44zw"] Feb 21 09:15:05 crc kubenswrapper[4820]: I0221 09:15:05.725439 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7930cbc-54a2-4fed-8153-27bb0a44221d" path="/var/lib/kubelet/pods/b7930cbc-54a2-4fed-8153-27bb0a44221d/volumes" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.816644 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.817261 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.817307 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.818059 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:15:13 crc kubenswrapper[4820]: I0221 09:15:13.818108 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" gracePeriod=600 Feb 21 09:15:13 crc kubenswrapper[4820]: E0221 09:15:13.939649 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.086259 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" exitCode=0 Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.086309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"} Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.086346 4820 scope.go:117] "RemoveContainer" containerID="00f2ad976a303bdff7905db0927834eff5c6d0654e866e9f66c57afad544d05b" Feb 21 09:15:14 crc kubenswrapper[4820]: I0221 09:15:14.087017 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 
21 09:15:14 crc kubenswrapper[4820]: E0221 09:15:14.087319 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:25 crc kubenswrapper[4820]: I0221 09:15:25.706060 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:15:25 crc kubenswrapper[4820]: E0221 09:15:25.707009 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:36 crc kubenswrapper[4820]: I0221 09:15:36.696444 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:15:36 crc kubenswrapper[4820]: E0221 09:15:36.697143 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:15:37 crc kubenswrapper[4820]: I0221 09:15:37.686893 4820 scope.go:117] "RemoveContainer" 
containerID="bedec9e828a462a9d7d9e96d01cf5a9452a72b80e424a2bc7656e332167d5caf" Feb 21 09:15:48 crc kubenswrapper[4820]: I0221 09:15:48.696792 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:15:48 crc kubenswrapper[4820]: E0221 09:15:48.697612 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:02 crc kubenswrapper[4820]: I0221 09:16:02.696914 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:02 crc kubenswrapper[4820]: E0221 09:16:02.697949 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.099873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:04 crc kubenswrapper[4820]: E0221 09:16:04.100686 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerName="collect-profiles" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.100700 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerName="collect-profiles" Feb 21 
09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.100930 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc2aafa-e2b9-427e-83a1-e9da552ad85e" containerName="collect-profiles" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.102357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.118463 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.222029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.222353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.222601 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.324480 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.324608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.324712 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.325000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.325273 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.696263 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhdp\" (UniqueName: 
\"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"redhat-operators-2twvm\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:04 crc kubenswrapper[4820]: I0221 09:16:04.731334 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.251725 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.548152 4820 generic.go:334] "Generic (PLEG): container finished" podID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a" exitCode=0 Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.548216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"} Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.548799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerStarted","Data":"5cb259e68b26b83bfc3d59361ebc5f021da97157c359d998893420becfd9ab1b"} Feb 21 09:16:05 crc kubenswrapper[4820]: I0221 09:16:05.549859 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:16:06 crc kubenswrapper[4820]: I0221 09:16:06.562987 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerStarted","Data":"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"} Feb 21 09:16:12 crc 
kubenswrapper[4820]: I0221 09:16:12.613185 4820 generic.go:334] "Generic (PLEG): container finished" podID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635" exitCode=0 Feb 21 09:16:12 crc kubenswrapper[4820]: I0221 09:16:12.613288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"} Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.347734 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.353326 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.361655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.518512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.518664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.518762 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.622562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.622668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.622779 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.623698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 
09:16:13.623787 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.654998 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"community-operators-q5l92\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:13 crc kubenswrapper[4820]: I0221 09:16:13.688072 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.642854 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerStarted","Data":"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"} Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.682019 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2twvm" podStartSLOduration=3.148724132 podStartE2EDuration="10.681997098s" podCreationTimestamp="2026-02-21 09:16:04 +0000 UTC" firstStartedPulling="2026-02-21 09:16:05.549621855 +0000 UTC m=+8940.582706053" lastFinishedPulling="2026-02-21 09:16:13.082894821 +0000 UTC m=+8948.115979019" observedRunningTime="2026-02-21 09:16:14.671694437 +0000 UTC m=+8949.704778635" watchObservedRunningTime="2026-02-21 09:16:14.681997098 +0000 UTC m=+8949.715081316" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.731980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.732022 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:14 crc kubenswrapper[4820]: I0221 09:16:14.888751 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.652068 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" exitCode=0 Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.652157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c"} Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.652426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerStarted","Data":"5a45eccaabcee7d31231def09c3616e013b3c8ce6ec34ca7e3f63210dcbce64e"} Feb 21 09:16:15 crc kubenswrapper[4820]: I0221 09:16:15.789179 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:15 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:15 crc kubenswrapper[4820]: > Feb 21 09:16:17 crc kubenswrapper[4820]: I0221 09:16:17.682580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" 
event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerStarted","Data":"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35"} Feb 21 09:16:17 crc kubenswrapper[4820]: I0221 09:16:17.697773 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:17 crc kubenswrapper[4820]: E0221 09:16:17.698091 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:24 crc kubenswrapper[4820]: I0221 09:16:24.136466 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" exitCode=0 Feb 21 09:16:24 crc kubenswrapper[4820]: I0221 09:16:24.136540 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35"} Feb 21 09:16:25 crc kubenswrapper[4820]: I0221 09:16:25.148646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerStarted","Data":"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af"} Feb 21 09:16:25 crc kubenswrapper[4820]: I0221 09:16:25.175364 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5l92" podStartSLOduration=3.284537698 podStartE2EDuration="12.175343222s" 
podCreationTimestamp="2026-02-21 09:16:13 +0000 UTC" firstStartedPulling="2026-02-21 09:16:15.655333337 +0000 UTC m=+8950.688417535" lastFinishedPulling="2026-02-21 09:16:24.546138851 +0000 UTC m=+8959.579223059" observedRunningTime="2026-02-21 09:16:25.167157949 +0000 UTC m=+8960.200242167" watchObservedRunningTime="2026-02-21 09:16:25.175343222 +0000 UTC m=+8960.208427420" Feb 21 09:16:25 crc kubenswrapper[4820]: I0221 09:16:25.793518 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:25 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:25 crc kubenswrapper[4820]: > Feb 21 09:16:29 crc kubenswrapper[4820]: I0221 09:16:29.699583 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:29 crc kubenswrapper[4820]: E0221 09:16:29.699976 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:33 crc kubenswrapper[4820]: I0221 09:16:33.689591 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:33 crc kubenswrapper[4820]: I0221 09:16:33.690142 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:33 crc kubenswrapper[4820]: I0221 09:16:33.749140 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:34 crc kubenswrapper[4820]: I0221 09:16:34.348511 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:34 crc kubenswrapper[4820]: I0221 09:16:34.984461 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:35 crc kubenswrapper[4820]: I0221 09:16:35.781529 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:35 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:35 crc kubenswrapper[4820]: > Feb 21 09:16:36 crc kubenswrapper[4820]: I0221 09:16:36.247463 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5l92" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server" containerID="cri-o://a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" gracePeriod=2 Feb 21 09:16:36 crc kubenswrapper[4820]: I0221 09:16:36.939620 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.090113 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") pod \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.090170 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") pod \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.090233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") pod \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\" (UID: \"4e3fb2aa-800a-409e-b230-cb71f1276c7b\") " Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.091331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities" (OuterVolumeSpecName: "utilities") pod "4e3fb2aa-800a-409e-b230-cb71f1276c7b" (UID: "4e3fb2aa-800a-409e-b230-cb71f1276c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.156928 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3fb2aa-800a-409e-b230-cb71f1276c7b" (UID: "4e3fb2aa-800a-409e-b230-cb71f1276c7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.192839 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.192872 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3fb2aa-800a-409e-b230-cb71f1276c7b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259117 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" exitCode=0 Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259172 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af"} Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259188 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5l92" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259211 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5l92" event={"ID":"4e3fb2aa-800a-409e-b230-cb71f1276c7b","Type":"ContainerDied","Data":"5a45eccaabcee7d31231def09c3616e013b3c8ce6ec34ca7e3f63210dcbce64e"} Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.259250 4820 scope.go:117] "RemoveContainer" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.282167 4820 scope.go:117] "RemoveContainer" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.591454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl" (OuterVolumeSpecName: "kube-api-access-w2cnl") pod "4e3fb2aa-800a-409e-b230-cb71f1276c7b" (UID: "4e3fb2aa-800a-409e-b230-cb71f1276c7b"). InnerVolumeSpecName "kube-api-access-w2cnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.601553 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cnl\" (UniqueName: \"kubernetes.io/projected/4e3fb2aa-800a-409e-b230-cb71f1276c7b-kube-api-access-w2cnl\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.607658 4820 scope.go:117] "RemoveContainer" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.696061 4820 scope.go:117] "RemoveContainer" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" Feb 21 09:16:37 crc kubenswrapper[4820]: E0221 09:16:37.697420 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af\": container with ID starting with a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af not found: ID does not exist" containerID="a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697457 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af"} err="failed to get container status \"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af\": rpc error: code = NotFound desc = could not find container \"a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af\": container with ID starting with a2c8067619801dab38e05d3b04e4bcc878b3f264e80c59d74b971f8ffcd3b0af not found: ID does not exist" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697478 4820 scope.go:117] "RemoveContainer" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" Feb 21 09:16:37 crc kubenswrapper[4820]: E0221 09:16:37.697872 
4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35\": container with ID starting with 8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35 not found: ID does not exist" containerID="8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697895 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35"} err="failed to get container status \"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35\": rpc error: code = NotFound desc = could not find container \"8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35\": container with ID starting with 8abed342a3275169d6bdb7eef4b17fc64308b1fed502d5f344e37978a0f65e35 not found: ID does not exist" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.697908 4820 scope.go:117] "RemoveContainer" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" Feb 21 09:16:37 crc kubenswrapper[4820]: E0221 09:16:37.698455 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c\": container with ID starting with dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c not found: ID does not exist" containerID="dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.698509 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c"} err="failed to get container status \"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c\": rpc error: code = 
NotFound desc = could not find container \"dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c\": container with ID starting with dcab43085e852fe80f00c5affd63908e8b82b34227d3ab59d5ad1835c357f78c not found: ID does not exist" Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.887106 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:37 crc kubenswrapper[4820]: I0221 09:16:37.897470 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5l92"] Feb 21 09:16:39 crc kubenswrapper[4820]: I0221 09:16:39.710532 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" path="/var/lib/kubelet/pods/4e3fb2aa-800a-409e-b230-cb71f1276c7b/volumes" Feb 21 09:16:43 crc kubenswrapper[4820]: I0221 09:16:43.697077 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:16:43 crc kubenswrapper[4820]: E0221 09:16:43.698079 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:16:45 crc kubenswrapper[4820]: I0221 09:16:45.776716 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" probeResult="failure" output=< Feb 21 09:16:45 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:16:45 crc kubenswrapper[4820]: > Feb 21 09:16:54 crc kubenswrapper[4820]: I0221 09:16:54.780616 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:54 crc kubenswrapper[4820]: I0221 09:16:54.832290 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:55 crc kubenswrapper[4820]: I0221 09:16:55.021286 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"] Feb 21 09:16:56 crc kubenswrapper[4820]: I0221 09:16:56.419529 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2twvm" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server" containerID="cri-o://fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" gracePeriod=2 Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.047760 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.209300 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") pod \"cca25b39-a0d0-4ca2-9000-9f888a196bab\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.209445 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") pod \"cca25b39-a0d0-4ca2-9000-9f888a196bab\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.209617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") pod \"cca25b39-a0d0-4ca2-9000-9f888a196bab\" (UID: \"cca25b39-a0d0-4ca2-9000-9f888a196bab\") " Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.210387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities" (OuterVolumeSpecName: "utilities") pod "cca25b39-a0d0-4ca2-9000-9f888a196bab" (UID: "cca25b39-a0d0-4ca2-9000-9f888a196bab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.214995 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp" (OuterVolumeSpecName: "kube-api-access-hzhdp") pod "cca25b39-a0d0-4ca2-9000-9f888a196bab" (UID: "cca25b39-a0d0-4ca2-9000-9f888a196bab"). InnerVolumeSpecName "kube-api-access-hzhdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.312300 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzhdp\" (UniqueName: \"kubernetes.io/projected/cca25b39-a0d0-4ca2-9000-9f888a196bab-kube-api-access-hzhdp\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.312332 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.338790 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cca25b39-a0d0-4ca2-9000-9f888a196bab" (UID: "cca25b39-a0d0-4ca2-9000-9f888a196bab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.414091 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca25b39-a0d0-4ca2-9000-9f888a196bab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428704 4820 generic.go:334] "Generic (PLEG): container finished" podID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357" exitCode=0 Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"} Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428771 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2twvm" event={"ID":"cca25b39-a0d0-4ca2-9000-9f888a196bab","Type":"ContainerDied","Data":"5cb259e68b26b83bfc3d59361ebc5f021da97157c359d998893420becfd9ab1b"} Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428785 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2twvm"
Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.428788 4820 scope.go:117] "RemoveContainer" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"
Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.456076 4820 scope.go:117] "RemoveContainer" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"
Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.462868 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"]
Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.471218 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2twvm"]
Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.697389 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:16:57 crc kubenswrapper[4820]: E0221 09:16:57.697793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:16:57 crc kubenswrapper[4820]: I0221 09:16:57.710222 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" path="/var/lib/kubelet/pods/cca25b39-a0d0-4ca2-9000-9f888a196bab/volumes"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.014907 4820 scope.go:117] "RemoveContainer" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.062010 4820 scope.go:117] "RemoveContainer" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"
Feb 21 09:16:58 crc kubenswrapper[4820]: E0221 09:16:58.063539 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357\": container with ID starting with fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357 not found: ID does not exist" containerID="fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.063593 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357"} err="failed to get container status \"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357\": rpc error: code = NotFound desc = could not find container \"fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357\": container with ID starting with fdc6daa4a7796e203b8e2d4c2b5c1951c63452909483c419ba6883b09e6e5357 not found: ID does not exist"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.063623 4820 scope.go:117] "RemoveContainer" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"
Feb 21 09:16:58 crc kubenswrapper[4820]: E0221 09:16:58.064228 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635\": container with ID starting with a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635 not found: ID does not exist" containerID="a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.064278 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635"} err="failed to get container status \"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635\": rpc error: code = NotFound desc = could not find container \"a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635\": container with ID starting with a3c204986b5f928424a0726a4d483a65ef2a47c955caae18cfbf42cdfaa71635 not found: ID does not exist"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.064299 4820 scope.go:117] "RemoveContainer" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"
Feb 21 09:16:58 crc kubenswrapper[4820]: E0221 09:16:58.064548 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a\": container with ID starting with 375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a not found: ID does not exist" containerID="375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"
Feb 21 09:16:58 crc kubenswrapper[4820]: I0221 09:16:58.064583 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a"} err="failed to get container status \"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a\": rpc error: code = NotFound desc = could not find container \"375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a\": container with ID starting with 375892ca0c36d3142d889eb8a1d9cca256f98ba29c578944f5047f6c7c433b0a not found: ID does not exist"
Feb 21 09:17:11 crc kubenswrapper[4820]: I0221 09:17:11.696133 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:17:11 crc kubenswrapper[4820]: E0221 09:17:11.697054 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:17:25 crc kubenswrapper[4820]: I0221 09:17:25.702566 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:17:25 crc kubenswrapper[4820]: E0221 09:17:25.703352 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:17:39 crc kubenswrapper[4820]: I0221 09:17:39.698230 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:17:39 crc kubenswrapper[4820]: E0221 09:17:39.699083 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:17:51 crc kubenswrapper[4820]: I0221 09:17:51.697396 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:17:51 crc kubenswrapper[4820]: E0221 09:17:51.698367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:18:02 crc kubenswrapper[4820]: I0221 09:18:02.696717 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:18:02 crc kubenswrapper[4820]: E0221 09:18:02.697411 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:18:17 crc kubenswrapper[4820]: I0221 09:18:17.696859 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:18:17 crc kubenswrapper[4820]: E0221 09:18:17.697924 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:18:28 crc kubenswrapper[4820]: I0221 09:18:28.697191 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:18:28 crc kubenswrapper[4820]: E0221 09:18:28.698508 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:18:41 crc kubenswrapper[4820]: I0221 09:18:41.696560 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:18:41 crc kubenswrapper[4820]: E0221 09:18:41.697261 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:18:55 crc kubenswrapper[4820]: I0221 09:18:55.702805 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:18:55 crc kubenswrapper[4820]: E0221 09:18:55.703557 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:19:09 crc kubenswrapper[4820]: I0221 09:19:09.696815 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:19:09 crc kubenswrapper[4820]: E0221 09:19:09.697679 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:19:23 crc kubenswrapper[4820]: I0221 09:19:23.697330 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:19:23 crc kubenswrapper[4820]: E0221 09:19:23.698435 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:19:34 crc kubenswrapper[4820]: I0221 09:19:34.697895 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:19:34 crc kubenswrapper[4820]: E0221 09:19:34.698695 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:19:45 crc kubenswrapper[4820]: I0221 09:19:45.708655 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:19:45 crc kubenswrapper[4820]: E0221 09:19:45.710358 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:19:58 crc kubenswrapper[4820]: I0221 09:19:58.697271 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:19:58 crc kubenswrapper[4820]: E0221 09:19:58.698035 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:20:13 crc kubenswrapper[4820]: I0221 09:20:13.696934 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:20:13 crc kubenswrapper[4820]: E0221 09:20:13.697764 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:20:28 crc kubenswrapper[4820]: I0221 09:20:28.697532 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab"
Feb 21 09:20:29 crc kubenswrapper[4820]: I0221 09:20:29.533668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"}
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.056819 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58zfg"]
Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.066843 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.066876 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server"
Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.066917 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-content"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.066927 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-content"
Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.066963 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-utilities"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.066973 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="extract-utilities"
Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.067016 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-utilities"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067026 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-utilities"
Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.067157 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-content"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067168 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="extract-content"
Feb 21 09:20:46 crc kubenswrapper[4820]: E0221 09:20:46.067196 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067205 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067777 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3fb2aa-800a-409e-b230-cb71f1276c7b" containerName="registry-server"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.067826 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca25b39-a0d0-4ca2-9000-9f888a196bab" containerName="registry-server"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.088763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"]
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.088776 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.170008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.170053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.170253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.271997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.272101 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.272123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.273018 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.273094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.698821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"certified-operators-58zfg\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") " pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:46 crc kubenswrapper[4820]: I0221 09:20:46.719196 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.202522 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"]
Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.869036 4820 generic.go:334] "Generic (PLEG): container finished" podID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b" exitCode=0
Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.869292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"}
Feb 21 09:20:47 crc kubenswrapper[4820]: I0221 09:20:47.869478 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerStarted","Data":"cc4e7f34da8593b16ce90f2bd7fafb03e884320aa86a31355dbc2e5db6b65df2"}
Feb 21 09:20:48 crc kubenswrapper[4820]: I0221 09:20:48.879934 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerStarted","Data":"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"}
Feb 21 09:20:50 crc kubenswrapper[4820]: I0221 09:20:50.904524 4820 generic.go:334] "Generic (PLEG): container finished" podID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb" exitCode=0
Feb 21 09:20:50 crc kubenswrapper[4820]: I0221 09:20:50.904627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"}
Feb 21 09:20:51 crc kubenswrapper[4820]: I0221 09:20:51.926252 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerStarted","Data":"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"}
Feb 21 09:20:51 crc kubenswrapper[4820]: I0221 09:20:51.955531 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58zfg" podStartSLOduration=2.552876756 podStartE2EDuration="5.955515014s" podCreationTimestamp="2026-02-21 09:20:46 +0000 UTC" firstStartedPulling="2026-02-21 09:20:47.871257142 +0000 UTC m=+9222.904341340" lastFinishedPulling="2026-02-21 09:20:51.2738954 +0000 UTC m=+9226.306979598" observedRunningTime="2026-02-21 09:20:51.952958884 +0000 UTC m=+9226.986043082" watchObservedRunningTime="2026-02-21 09:20:51.955515014 +0000 UTC m=+9226.988599212"
Feb 21 09:20:56 crc kubenswrapper[4820]: I0221 09:20:56.720133 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:56 crc kubenswrapper[4820]: I0221 09:20:56.720720 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:56 crc kubenswrapper[4820]: I0221 09:20:56.767339 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:57 crc kubenswrapper[4820]: I0221 09:20:57.021949 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:57 crc kubenswrapper[4820]: I0221 09:20:57.079983 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"]
Feb 21 09:20:58 crc kubenswrapper[4820]: I0221 09:20:58.982707 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58zfg" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server" containerID="cri-o://f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" gracePeriod=2
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.656435 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.738381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") pod \"e753b5e0-247c-41c3-b7b1-d0b10a067153\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") "
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.738669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") pod \"e753b5e0-247c-41c3-b7b1-d0b10a067153\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") "
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.738743 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") pod \"e753b5e0-247c-41c3-b7b1-d0b10a067153\" (UID: \"e753b5e0-247c-41c3-b7b1-d0b10a067153\") "
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.740333 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities" (OuterVolumeSpecName: "utilities") pod "e753b5e0-247c-41c3-b7b1-d0b10a067153" (UID: "e753b5e0-247c-41c3-b7b1-d0b10a067153"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.743864 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt" (OuterVolumeSpecName: "kube-api-access-bnwnt") pod "e753b5e0-247c-41c3-b7b1-d0b10a067153" (UID: "e753b5e0-247c-41c3-b7b1-d0b10a067153"). InnerVolumeSpecName "kube-api-access-bnwnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.795320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e753b5e0-247c-41c3-b7b1-d0b10a067153" (UID: "e753b5e0-247c-41c3-b7b1-d0b10a067153"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.841120 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.841149 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e753b5e0-247c-41c3-b7b1-d0b10a067153-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.841159 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnwnt\" (UniqueName: \"kubernetes.io/projected/e753b5e0-247c-41c3-b7b1-d0b10a067153-kube-api-access-bnwnt\") on node \"crc\" DevicePath \"\""
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995187 4820 generic.go:334] "Generic (PLEG): container finished" podID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5" exitCode=0
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"}
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58zfg" event={"ID":"e753b5e0-247c-41c3-b7b1-d0b10a067153","Type":"ContainerDied","Data":"cc4e7f34da8593b16ce90f2bd7fafb03e884320aa86a31355dbc2e5db6b65df2"}
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995298 4820 scope.go:117] "RemoveContainer" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"
Feb 21 09:20:59 crc kubenswrapper[4820]: I0221 09:20:59.995500 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58zfg"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.038029 4820 scope.go:117] "RemoveContainer" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.046389 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"]
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.058511 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58zfg"]
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.060224 4820 scope.go:117] "RemoveContainer" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.119050 4820 scope.go:117] "RemoveContainer" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"
Feb 21 09:21:00 crc kubenswrapper[4820]: E0221 09:21:00.119774 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5\": container with ID starting with f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5 not found: ID does not exist" containerID="f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.119813 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5"} err="failed to get container status \"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5\": rpc error: code = NotFound desc = could not find container \"f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5\": container with ID starting with f34609ade0c41f5c6e8a9e827dd60871ec9ecf622eb9039e265aa595c96aefc5 not found: ID does not exist"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.119838 4820 scope.go:117] "RemoveContainer" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"
Feb 21 09:21:00 crc kubenswrapper[4820]: E0221 09:21:00.120284 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb\": container with ID starting with 80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb not found: ID does not exist" containerID="80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.120320 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb"} err="failed to get container status \"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb\": rpc error: code = NotFound desc = could not find container \"80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb\": container with ID starting with 80d7df308c1cddd052ff3342cd93ebb357af79077ca2036dda14cebf143980fb not found: ID does not exist"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.120341 4820 scope.go:117] "RemoveContainer" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"
Feb 21 09:21:00 crc kubenswrapper[4820]: E0221 09:21:00.120569 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b\": container with ID starting with cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b not found: ID does not exist" containerID="cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"
Feb 21 09:21:00 crc kubenswrapper[4820]: I0221 09:21:00.120589 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b"} err="failed to get container status \"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b\": rpc error: code = NotFound desc = could not find container \"cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b\": container with ID starting with cd7ed2e49e6af702d2de53182e847b1562314b8c62fb7ee177eb10084632013b not found: ID does not exist"
Feb 21 09:21:01 crc kubenswrapper[4820]: I0221 09:21:01.709996 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" path="/var/lib/kubelet/pods/e753b5e0-247c-41c3-b7b1-d0b10a067153/volumes"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.026780 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"]
Feb 21 09:21:42 crc kubenswrapper[4820]: E0221 09:21:42.027768 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-utilities"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.027782 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-utilities"
Feb 21 09:21:42 crc kubenswrapper[4820]: E0221 09:21:42.027791 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.027797 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server"
Feb 21 09:21:42 crc kubenswrapper[4820]: E0221 09:21:42.027807 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-content"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.027813 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="extract-content"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.028033 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e753b5e0-247c-41c3-b7b1-d0b10a067153" containerName="registry-server"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.029794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.076874 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"]
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.232336 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.233608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng"
Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.233917 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng"
Feb 21 09:21:42 crc kubenswrapper[4820]:
I0221 09:21:42.335147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335741 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.335863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.353259 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"redhat-marketplace-gb7ng\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.373402 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:42 crc kubenswrapper[4820]: I0221 09:21:42.917081 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.411082 4820 generic.go:334] "Generic (PLEG): container finished" podID="32781487-aa7d-4011-9901-2f3e852902fc" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" exitCode=0 Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.411401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a"} Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.411429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerStarted","Data":"c5b9b2036cb79cdd9ab9e450798a60fdb109f05ea61c9d939fe1251a5af2e168"} Feb 21 09:21:43 crc kubenswrapper[4820]: I0221 09:21:43.412817 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 21 09:21:44 crc kubenswrapper[4820]: I0221 09:21:44.422097 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" 
event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerStarted","Data":"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6"} Feb 21 09:21:45 crc kubenswrapper[4820]: I0221 09:21:45.434795 4820 generic.go:334] "Generic (PLEG): container finished" podID="32781487-aa7d-4011-9901-2f3e852902fc" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" exitCode=0 Feb 21 09:21:45 crc kubenswrapper[4820]: I0221 09:21:45.434908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6"} Feb 21 09:21:47 crc kubenswrapper[4820]: I0221 09:21:47.461668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerStarted","Data":"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209"} Feb 21 09:21:47 crc kubenswrapper[4820]: I0221 09:21:47.504705 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gb7ng" podStartSLOduration=3.077181143 podStartE2EDuration="5.504682673s" podCreationTimestamp="2026-02-21 09:21:42 +0000 UTC" firstStartedPulling="2026-02-21 09:21:43.412627718 +0000 UTC m=+9278.445711916" lastFinishedPulling="2026-02-21 09:21:45.840129248 +0000 UTC m=+9280.873213446" observedRunningTime="2026-02-21 09:21:47.492503211 +0000 UTC m=+9282.525587419" watchObservedRunningTime="2026-02-21 09:21:47.504682673 +0000 UTC m=+9282.537766871" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.373540 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.375587 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.437221 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.550842 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:52 crc kubenswrapper[4820]: I0221 09:21:52.683050 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:54 crc kubenswrapper[4820]: I0221 09:21:54.520356 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gb7ng" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" containerID="cri-o://204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" gracePeriod=2 Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.099565 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.197497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") pod \"32781487-aa7d-4011-9901-2f3e852902fc\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.197653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") pod \"32781487-aa7d-4011-9901-2f3e852902fc\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.197680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") pod \"32781487-aa7d-4011-9901-2f3e852902fc\" (UID: \"32781487-aa7d-4011-9901-2f3e852902fc\") " Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.198362 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities" (OuterVolumeSpecName: "utilities") pod "32781487-aa7d-4011-9901-2f3e852902fc" (UID: "32781487-aa7d-4011-9901-2f3e852902fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.205014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c" (OuterVolumeSpecName: "kube-api-access-btn2c") pod "32781487-aa7d-4011-9901-2f3e852902fc" (UID: "32781487-aa7d-4011-9901-2f3e852902fc"). InnerVolumeSpecName "kube-api-access-btn2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.221532 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32781487-aa7d-4011-9901-2f3e852902fc" (UID: "32781487-aa7d-4011-9901-2f3e852902fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.300563 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.300820 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btn2c\" (UniqueName: \"kubernetes.io/projected/32781487-aa7d-4011-9901-2f3e852902fc-kube-api-access-btn2c\") on node \"crc\" DevicePath \"\"" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.300830 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32781487-aa7d-4011-9901-2f3e852902fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537542 4820 generic.go:334] "Generic (PLEG): container finished" podID="32781487-aa7d-4011-9901-2f3e852902fc" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" exitCode=0 Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209"} Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-gb7ng" event={"ID":"32781487-aa7d-4011-9901-2f3e852902fc","Type":"ContainerDied","Data":"c5b9b2036cb79cdd9ab9e450798a60fdb109f05ea61c9d939fe1251a5af2e168"} Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.537680 4820 scope.go:117] "RemoveContainer" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.538116 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gb7ng" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.589919 4820 scope.go:117] "RemoveContainer" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.591225 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.605152 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gb7ng"] Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.621682 4820 scope.go:117] "RemoveContainer" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.665114 4820 scope.go:117] "RemoveContainer" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" Feb 21 09:21:55 crc kubenswrapper[4820]: E0221 09:21:55.665640 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209\": container with ID starting with 204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209 not found: ID does not exist" containerID="204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.665679 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209"} err="failed to get container status \"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209\": rpc error: code = NotFound desc = could not find container \"204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209\": container with ID starting with 204cb37a79ad6077b69bd2afae445478142f70a13fa09de41424ba0111c7e209 not found: ID does not exist" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.665704 4820 scope.go:117] "RemoveContainer" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" Feb 21 09:21:55 crc kubenswrapper[4820]: E0221 09:21:55.666623 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6\": container with ID starting with 288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6 not found: ID does not exist" containerID="288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.666725 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6"} err="failed to get container status \"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6\": rpc error: code = NotFound desc = could not find container \"288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6\": container with ID starting with 288e08846f2156336ca3af1dd719f347b8fbb3ee251b98948cbfc0c699cda0d6 not found: ID does not exist" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.666780 4820 scope.go:117] "RemoveContainer" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" Feb 21 09:21:55 crc kubenswrapper[4820]: E0221 
09:21:55.667298 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a\": container with ID starting with 2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a not found: ID does not exist" containerID="2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.667369 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a"} err="failed to get container status \"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a\": rpc error: code = NotFound desc = could not find container \"2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a\": container with ID starting with 2b3b6b3d3a63635cf8d360f437f4e325d9683acd9eeecf3eaa06c0ee45dd7e8a not found: ID does not exist" Feb 21 09:21:55 crc kubenswrapper[4820]: I0221 09:21:55.713971 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32781487-aa7d-4011-9901-2f3e852902fc" path="/var/lib/kubelet/pods/32781487-aa7d-4011-9901-2f3e852902fc/volumes" Feb 21 09:22:43 crc kubenswrapper[4820]: I0221 09:22:43.816798 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:22:43 crc kubenswrapper[4820]: I0221 09:22:43.817770 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 21 09:23:13 crc kubenswrapper[4820]: I0221 09:23:13.816828 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:23:13 crc kubenswrapper[4820]: I0221 09:23:13.817467 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.816044 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.816565 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.816836 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.817684 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"} 
pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:23:43 crc kubenswrapper[4820]: I0221 09:23:43.817742 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b" gracePeriod=600 Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634420 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b" exitCode=0 Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634544 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"} Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"} Feb 21 09:23:44 crc kubenswrapper[4820]: I0221 09:23:44.634776 4820 scope.go:117] "RemoveContainer" containerID="f7a78b51c96728c16ce3ef64cc32dbec13d6efd822c8b0a2b992f30ad47d06ab" Feb 21 09:24:38 crc kubenswrapper[4820]: I0221 09:24:38.415770 4820 generic.go:334] "Generic (PLEG): container finished" podID="417782d7-a42e-4872-9e2d-0f11848812cd" containerID="ab6120679bb44c551ad880ba2cc6a7b2086118bf8f465825a332cb5176e9c344" exitCode=0 Feb 21 09:24:38 crc kubenswrapper[4820]: I0221 09:24:38.415869 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerDied","Data":"ab6120679bb44c551ad880ba2cc6a7b2086118bf8f465825a332cb5176e9c344"} Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.871768 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.873971 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874114 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874292 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874447 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874513 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874578 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874682 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.874783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") pod \"417782d7-a42e-4872-9e2d-0f11848812cd\" (UID: \"417782d7-a42e-4872-9e2d-0f11848812cd\") " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.875848 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: 
"417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.876222 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.878009 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data" (OuterVolumeSpecName: "config-data") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.879976 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.885472 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg" (OuterVolumeSpecName: "kube-api-access-jwptg") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "kube-api-access-jwptg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.885571 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.908985 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.913143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.929230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.961943 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "417782d7-a42e-4872-9e2d-0f11848812cd" (UID: "417782d7-a42e-4872-9e2d-0f11848812cd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978034 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978069 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/417782d7-a42e-4872-9e2d-0f11848812cd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978080 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978111 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978121 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwptg\" (UniqueName: \"kubernetes.io/projected/417782d7-a42e-4872-9e2d-0f11848812cd-kube-api-access-jwptg\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978130 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978140 4820 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:39 crc kubenswrapper[4820]: I0221 09:24:39.978148 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/417782d7-a42e-4872-9e2d-0f11848812cd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.000557 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.079570 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.465742 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.466336 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"417782d7-a42e-4872-9e2d-0f11848812cd","Type":"ContainerDied","Data":"7e5ff066704f3e33a6c2fdd8d04c5c80690a10bd358cca7bf49443c234af864d"} Feb 21 09:24:40 crc kubenswrapper[4820]: I0221 09:24:40.466390 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5ff066704f3e33a6c2fdd8d04c5c80690a10bd358cca7bf49443c234af864d" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.742507 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743467 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743484 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743503 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" containerName="tempest-tests-tempest-tests-runner" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743511 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" containerName="tempest-tests-tempest-tests-runner" Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743526 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-content" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743534 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-content" 
Feb 21 09:24:49 crc kubenswrapper[4820]: E0221 09:24:49.743551 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-utilities" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743559 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="extract-utilities" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743805 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="417782d7-a42e-4872-9e2d-0f11848812cd" containerName="tempest-tests-tempest-tests-runner" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.743828 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="32781487-aa7d-4011-9901-2f3e852902fc" containerName="registry-server" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.744673 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.747222 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ccs7x" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.755089 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.882348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrdz\" (UniqueName: \"kubernetes.io/projected/fe061f11-8b08-455e-856c-ac81ff40d655-kube-api-access-lnrdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.882500 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.986315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrdz\" (UniqueName: \"kubernetes.io/projected/fe061f11-8b08-455e-856c-ac81ff40d655-kube-api-access-lnrdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.986389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:49 crc kubenswrapper[4820]: I0221 09:24:49.986975 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.010641 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrdz\" (UniqueName: \"kubernetes.io/projected/fe061f11-8b08-455e-856c-ac81ff40d655-kube-api-access-lnrdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: 
\"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.032256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fe061f11-8b08-455e-856c-ac81ff40d655\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.078560 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.516655 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 21 09:24:50 crc kubenswrapper[4820]: I0221 09:24:50.576952 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fe061f11-8b08-455e-856c-ac81ff40d655","Type":"ContainerStarted","Data":"a5dbd82e616fdc4fd12a1549520390a0c44b537a8b2a12bc309335a3d842f8e2"} Feb 21 09:24:51 crc kubenswrapper[4820]: I0221 09:24:51.590050 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fe061f11-8b08-455e-856c-ac81ff40d655","Type":"ContainerStarted","Data":"2156cd57541c7a07479a2cfd0914a51b3444204c2c6a7eeebff0a3ac3dbc0405"} Feb 21 09:24:51 crc kubenswrapper[4820]: I0221 09:24:51.616069 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.774489695 podStartE2EDuration="2.61605332s" podCreationTimestamp="2026-02-21 09:24:49 +0000 UTC" firstStartedPulling="2026-02-21 09:24:50.526041357 +0000 UTC m=+9465.559125555" 
lastFinishedPulling="2026-02-21 09:24:51.367604982 +0000 UTC m=+9466.400689180" observedRunningTime="2026-02-21 09:24:51.611796565 +0000 UTC m=+9466.644880763" watchObservedRunningTime="2026-02-21 09:24:51.61605332 +0000 UTC m=+9466.649137508" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.040757 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.047037 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.050465 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j7d4d"/"kube-root-ca.crt" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.050992 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j7d4d"/"openshift-service-ca.crt" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.057332 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-j7d4d"/"default-dockercfg-rxswn" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.069093 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.162074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.162326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.263534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.263615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.263977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.282860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"must-gather-qvcwh\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:56 crc kubenswrapper[4820]: I0221 09:25:56.382914 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:25:57 crc kubenswrapper[4820]: I0221 09:25:57.451535 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:25:58 crc kubenswrapper[4820]: I0221 09:25:58.238760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerStarted","Data":"b9cfef5032282ef1236e5e565e8e043337b305d7782be455af5d55dd83ba79e9"} Feb 21 09:26:04 crc kubenswrapper[4820]: I0221 09:26:04.321861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerStarted","Data":"5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351"} Feb 21 09:26:04 crc kubenswrapper[4820]: I0221 09:26:04.322424 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerStarted","Data":"4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f"} Feb 21 09:26:04 crc kubenswrapper[4820]: I0221 09:26:04.340894 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" podStartSLOduration=3.142390143 podStartE2EDuration="9.340871694s" podCreationTimestamp="2026-02-21 09:25:55 +0000 UTC" firstStartedPulling="2026-02-21 09:25:57.455002092 +0000 UTC m=+9532.488086290" lastFinishedPulling="2026-02-21 09:26:03.653483643 +0000 UTC m=+9538.686567841" observedRunningTime="2026-02-21 09:26:04.336622478 +0000 UTC m=+9539.369706676" watchObservedRunningTime="2026-02-21 09:26:04.340871694 +0000 UTC m=+9539.373955892" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.674200 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-j7d4d/crc-debug-kxw2c"] Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.676402 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.721939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.722018 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.830106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.830226 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.830659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:07 crc kubenswrapper[4820]: I0221 09:26:07.853744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"crc-debug-kxw2c\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") " pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:08 crc kubenswrapper[4820]: I0221 09:26:08.002150 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" Feb 21 09:26:08 crc kubenswrapper[4820]: W0221 09:26:08.051870 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod564eff3b_59b2_4e16_af32_03335f96da2f.slice/crio-efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864 WatchSource:0}: Error finding container efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864: Status 404 returned error can't find the container with id efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864 Feb 21 09:26:08 crc kubenswrapper[4820]: I0221 09:26:08.369542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" event={"ID":"564eff3b-59b2-4e16-af32-03335f96da2f","Type":"ContainerStarted","Data":"efc380030d1820246e5f26755d01a041d24299fd8a7727a2acb6409fdb6ed864"} Feb 21 09:26:13 crc kubenswrapper[4820]: I0221 09:26:13.815917 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 
09:26:13 crc kubenswrapper[4820]: I0221 09:26:13.816390 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:26:18 crc kubenswrapper[4820]: I0221 09:26:18.468638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" event={"ID":"564eff3b-59b2-4e16-af32-03335f96da2f","Type":"ContainerStarted","Data":"26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97"} Feb 21 09:26:18 crc kubenswrapper[4820]: I0221 09:26:18.490666 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" podStartSLOduration=2.17072135 podStartE2EDuration="11.490632579s" podCreationTimestamp="2026-02-21 09:26:07 +0000 UTC" firstStartedPulling="2026-02-21 09:26:08.053827385 +0000 UTC m=+9543.086911583" lastFinishedPulling="2026-02-21 09:26:17.373738614 +0000 UTC m=+9552.406822812" observedRunningTime="2026-02-21 09:26:18.486386384 +0000 UTC m=+9553.519470582" watchObservedRunningTime="2026-02-21 09:26:18.490632579 +0000 UTC m=+9553.523731697" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.088108 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.091499 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.112408 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.264411 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.264476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.264552 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.366629 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367286 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367785 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.367931 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.386674 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"community-operators-vsvw6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") " pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:39 crc kubenswrapper[4820]: I0221 09:26:39.419802 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6" Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.072038 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"] Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.700326 4820 generic.go:334] "Generic (PLEG): container finished" podID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerID="39d2f75cf2e8ea82cea4326b00c53a8b4b8c6ce1687eb4a6a0abb049bcba2750" exitCode=0 Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.700443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"39d2f75cf2e8ea82cea4326b00c53a8b4b8c6ce1687eb4a6a0abb049bcba2750"} Feb 21 09:26:40 crc kubenswrapper[4820]: I0221 09:26:40.700878 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerStarted","Data":"5994a148d0ba6fc8e753c05577c030b45c9f3f4db18a5b2394bf8ac0120b2fb0"} Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.353585 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.356330 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.365432 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"] Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.520136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.520630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.520687 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.755364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.755459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.755525 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.756478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:41 crc kubenswrapper[4820]: I0221 09:26:41.756775 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:42 crc kubenswrapper[4820]: I0221 09:26:42.425637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"redhat-operators-dnrdx\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") " pod="openshift-marketplace/redhat-operators-dnrdx" Feb 21 09:26:42 crc kubenswrapper[4820]: I0221 09:26:42.588154 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:26:42 crc kubenswrapper[4820]: I0221 09:26:42.892743 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerStarted","Data":"4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5"}
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.143147 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"]
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.815773 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.816137 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.904932 4820 generic.go:334] "Generic (PLEG): container finished" podID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerID="4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5" exitCode=0
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.904989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5"}
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.907173 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.910534 4820 generic.go:334] "Generic (PLEG): container finished" podID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702" exitCode=0
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.910573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"}
Feb 21 09:26:43 crc kubenswrapper[4820]: I0221 09:26:43.910602 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerStarted","Data":"6e7c8195813b95501a26d888a217c17c6d993ce9063ff83f248bfb25d3f41950"}
Feb 21 09:26:44 crc kubenswrapper[4820]: I0221 09:26:44.923836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerStarted","Data":"591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713"}
Feb 21 09:26:44 crc kubenswrapper[4820]: I0221 09:26:44.951311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsvw6" podStartSLOduration=2.371411252 podStartE2EDuration="5.951291613s" podCreationTimestamp="2026-02-21 09:26:39 +0000 UTC" firstStartedPulling="2026-02-21 09:26:40.703952863 +0000 UTC m=+9575.737037061" lastFinishedPulling="2026-02-21 09:26:44.283833214 +0000 UTC m=+9579.316917422" observedRunningTime="2026-02-21 09:26:44.940454798 +0000 UTC m=+9579.973539006" watchObservedRunningTime="2026-02-21 09:26:44.951291613 +0000 UTC m=+9579.984375811"
Feb 21 09:26:45 crc kubenswrapper[4820]: I0221 09:26:45.935975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerStarted","Data":"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"}
Feb 21 09:26:48 crc kubenswrapper[4820]: I0221 09:26:48.965684 4820 generic.go:334] "Generic (PLEG): container finished" podID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391" exitCode=0
Feb 21 09:26:48 crc kubenswrapper[4820]: I0221 09:26:48.965732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"}
Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.420385 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsvw6"
Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.420711 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsvw6"
Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.489797 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsvw6"
Feb 21 09:26:49 crc kubenswrapper[4820]: I0221 09:26:49.977663 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerStarted","Data":"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"}
Feb 21 09:26:50 crc kubenswrapper[4820]: I0221 09:26:50.035320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsvw6"
Feb 21 09:26:50 crc kubenswrapper[4820]: I0221 09:26:50.057632 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnrdx" podStartSLOduration=3.646077359 podStartE2EDuration="9.05761096s" podCreationTimestamp="2026-02-21 09:26:41 +0000 UTC" firstStartedPulling="2026-02-21 09:26:43.912522323 +0000 UTC m=+9578.945606521" lastFinishedPulling="2026-02-21 09:26:49.324055924 +0000 UTC m=+9584.357140122" observedRunningTime="2026-02-21 09:26:50.000658041 +0000 UTC m=+9585.033742259" watchObservedRunningTime="2026-02-21 09:26:50.05761096 +0000 UTC m=+9585.090695158"
Feb 21 09:26:50 crc kubenswrapper[4820]: I0221 09:26:50.876388 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"]
Feb 21 09:26:51 crc kubenswrapper[4820]: I0221 09:26:51.992870 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vsvw6" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server" containerID="cri-o://591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713" gracePeriod=2
Feb 21 09:26:52 crc kubenswrapper[4820]: I0221 09:26:52.589415 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:26:52 crc kubenswrapper[4820]: I0221 09:26:52.589465 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:26:53 crc kubenswrapper[4820]: I0221 09:26:53.644010 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dnrdx" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" probeResult="failure" output=<
Feb 21 09:26:53 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s
Feb 21 09:26:53 crc kubenswrapper[4820]: >
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.024596 4820 generic.go:334] "Generic (PLEG): container finished" podID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerID="591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713" exitCode=0
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.024773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713"}
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.798414 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6"
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.823278 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") pod \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") "
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.823344 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") pod \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") "
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.823409 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") pod \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\" (UID: \"20fccb1d-70aa-48f7-a4e6-79411b1641f6\") "
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.824323 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities" (OuterVolumeSpecName: "utilities") pod "20fccb1d-70aa-48f7-a4e6-79411b1641f6" (UID: "20fccb1d-70aa-48f7-a4e6-79411b1641f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.848467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x" (OuterVolumeSpecName: "kube-api-access-cfn7x") pod "20fccb1d-70aa-48f7-a4e6-79411b1641f6" (UID: "20fccb1d-70aa-48f7-a4e6-79411b1641f6"). InnerVolumeSpecName "kube-api-access-cfn7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.891723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20fccb1d-70aa-48f7-a4e6-79411b1641f6" (UID: "20fccb1d-70aa-48f7-a4e6-79411b1641f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.926224 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfn7x\" (UniqueName: \"kubernetes.io/projected/20fccb1d-70aa-48f7-a4e6-79411b1641f6-kube-api-access-cfn7x\") on node \"crc\" DevicePath \"\""
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.926270 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:26:55 crc kubenswrapper[4820]: I0221 09:26:55.926280 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fccb1d-70aa-48f7-a4e6-79411b1641f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.045490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsvw6" event={"ID":"20fccb1d-70aa-48f7-a4e6-79411b1641f6","Type":"ContainerDied","Data":"5994a148d0ba6fc8e753c05577c030b45c9f3f4db18a5b2394bf8ac0120b2fb0"}
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.045572 4820 scope.go:117] "RemoveContainer" containerID="591d47b6af5411e9300cb469c2baa6c99f76ced3819a371e6f9015a6aeb70713"
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.045593 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsvw6"
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.079080 4820 scope.go:117] "RemoveContainer" containerID="4672f03b848f9034817a5956a566519966c81d9baaf16e7086f90cb38cc2bfc5"
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.098699 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"]
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.110430 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vsvw6"]
Feb 21 09:26:56 crc kubenswrapper[4820]: I0221 09:26:56.199437 4820 scope.go:117] "RemoveContainer" containerID="39d2f75cf2e8ea82cea4326b00c53a8b4b8c6ce1687eb4a6a0abb049bcba2750"
Feb 21 09:26:57 crc kubenswrapper[4820]: I0221 09:26:57.707919 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" path="/var/lib/kubelet/pods/20fccb1d-70aa-48f7-a4e6-79411b1641f6/volumes"
Feb 21 09:27:02 crc kubenswrapper[4820]: I0221 09:27:02.656724 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:27:02 crc kubenswrapper[4820]: I0221 09:27:02.704502 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:27:02 crc kubenswrapper[4820]: I0221 09:27:02.894257 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"]
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.117654 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnrdx" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server" containerID="cri-o://171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" gracePeriod=2
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.596468 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.700206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") pod \"762a9a96-5afb-4798-9e70-7f385fe215ba\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") "
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.700700 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") pod \"762a9a96-5afb-4798-9e70-7f385fe215ba\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") "
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.700895 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") pod \"762a9a96-5afb-4798-9e70-7f385fe215ba\" (UID: \"762a9a96-5afb-4798-9e70-7f385fe215ba\") "
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.701328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities" (OuterVolumeSpecName: "utilities") pod "762a9a96-5afb-4798-9e70-7f385fe215ba" (UID: "762a9a96-5afb-4798-9e70-7f385fe215ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.701850 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.704920 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p" (OuterVolumeSpecName: "kube-api-access-t2r8p") pod "762a9a96-5afb-4798-9e70-7f385fe215ba" (UID: "762a9a96-5afb-4798-9e70-7f385fe215ba"). InnerVolumeSpecName "kube-api-access-t2r8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.804045 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2r8p\" (UniqueName: \"kubernetes.io/projected/762a9a96-5afb-4798-9e70-7f385fe215ba-kube-api-access-t2r8p\") on node \"crc\" DevicePath \"\""
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.818055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "762a9a96-5afb-4798-9e70-7f385fe215ba" (UID: "762a9a96-5afb-4798-9e70-7f385fe215ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:27:04 crc kubenswrapper[4820]: I0221 09:27:04.909302 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762a9a96-5afb-4798-9e70-7f385fe215ba-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132103 4820 generic.go:334] "Generic (PLEG): container finished" podID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb" exitCode=0
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"}
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnrdx" event={"ID":"762a9a96-5afb-4798-9e70-7f385fe215ba","Type":"ContainerDied","Data":"6e7c8195813b95501a26d888a217c17c6d993ce9063ff83f248bfb25d3f41950"}
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132204 4820 scope.go:117] "RemoveContainer" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.132387 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnrdx"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.172906 4820 scope.go:117] "RemoveContainer" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.177701 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"]
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.192349 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnrdx"]
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.204764 4820 scope.go:117] "RemoveContainer" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.253856 4820 scope.go:117] "RemoveContainer" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"
Feb 21 09:27:05 crc kubenswrapper[4820]: E0221 09:27:05.254383 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb\": container with ID starting with 171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb not found: ID does not exist" containerID="171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.254451 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb"} err="failed to get container status \"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb\": rpc error: code = NotFound desc = could not find container \"171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb\": container with ID starting with 171d8f0b05c938eca6117eef3b56008cf5be32fec792363849b5457fecdbfbeb not found: ID does not exist"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.254491 4820 scope.go:117] "RemoveContainer" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"
Feb 21 09:27:05 crc kubenswrapper[4820]: E0221 09:27:05.255136 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391\": container with ID starting with d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391 not found: ID does not exist" containerID="d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.255232 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391"} err="failed to get container status \"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391\": rpc error: code = NotFound desc = could not find container \"d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391\": container with ID starting with d984136341a2ec18e10daf9aa4991c8b7dc7f09b4d5ad75b5d80b19fb56e3391 not found: ID does not exist"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.255326 4820 scope.go:117] "RemoveContainer" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"
Feb 21 09:27:05 crc kubenswrapper[4820]: E0221 09:27:05.255742 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702\": container with ID starting with 59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702 not found: ID does not exist" containerID="59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.255782 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702"} err="failed to get container status \"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702\": rpc error: code = NotFound desc = could not find container \"59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702\": container with ID starting with 59a2072ea46f27f62d9f15dc6be82e1da05f8e7a3f434521be46a8473303f702 not found: ID does not exist"
Feb 21 09:27:05 crc kubenswrapper[4820]: I0221 09:27:05.713190 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" path="/var/lib/kubelet/pods/762a9a96-5afb-4798-9e70-7f385fe215ba/volumes"
Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.816778 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.818001 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.818080 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z"
Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.819138 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 21 09:27:13 crc kubenswrapper[4820]: I0221 09:27:13.819220 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" gracePeriod=600
Feb 21 09:27:13 crc kubenswrapper[4820]: E0221 09:27:13.954833 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.221897 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" exitCode=0
Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.221967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"}
Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.222000 4820 scope.go:117] "RemoveContainer" containerID="1812e93a7fc1b49a1b4e92bb64b23e3ffb3d863faf7ace601e07e65ec966779b"
Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.222691 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"
Feb 21 09:27:14 crc kubenswrapper[4820]: E0221 09:27:14.223011 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.223767 4820 generic.go:334] "Generic (PLEG): container finished" podID="564eff3b-59b2-4e16-af32-03335f96da2f" containerID="26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97" exitCode=0
Feb 21 09:27:14 crc kubenswrapper[4820]: I0221 09:27:14.223803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c" event={"ID":"564eff3b-59b2-4e16-af32-03335f96da2f","Type":"ContainerDied","Data":"26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97"}
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.347048 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c"
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.383003 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-kxw2c"]
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.392799 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-kxw2c"]
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.522930 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") pod \"564eff3b-59b2-4e16-af32-03335f96da2f\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") "
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.523082 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") pod \"564eff3b-59b2-4e16-af32-03335f96da2f\" (UID: \"564eff3b-59b2-4e16-af32-03335f96da2f\") "
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.523211 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host" (OuterVolumeSpecName: "host") pod "564eff3b-59b2-4e16-af32-03335f96da2f" (UID: "564eff3b-59b2-4e16-af32-03335f96da2f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.523654 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/564eff3b-59b2-4e16-af32-03335f96da2f-host\") on node \"crc\" DevicePath \"\""
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.528606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc" (OuterVolumeSpecName: "kube-api-access-g6cwc") pod "564eff3b-59b2-4e16-af32-03335f96da2f" (UID: "564eff3b-59b2-4e16-af32-03335f96da2f"). InnerVolumeSpecName "kube-api-access-g6cwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.625037 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cwc\" (UniqueName: \"kubernetes.io/projected/564eff3b-59b2-4e16-af32-03335f96da2f-kube-api-access-g6cwc\") on node \"crc\" DevicePath \"\""
Feb 21 09:27:15 crc kubenswrapper[4820]: I0221 09:27:15.711768 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" path="/var/lib/kubelet/pods/564eff3b-59b2-4e16-af32-03335f96da2f/volumes"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.250961 4820 scope.go:117] "RemoveContainer" containerID="26e8d8f4d7a26bb699737ee8b8b8730e21ba3829bf9fc70bc39b2ed950c95f97"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.251025 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-kxw2c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.556767 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-cnf8c"]
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557168 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-content"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557180 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-content"
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557194 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-utilities"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557200 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-utilities"
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557211 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557217 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server"
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557227 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" containerName="container-00"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557246 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" containerName="container-00"
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557267 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-utilities"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557273 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="extract-utilities"
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557293 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-content"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557299 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="extract-content"
Feb 21 09:27:16 crc kubenswrapper[4820]: E0221 09:27:16.557317 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557322 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557505 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fccb1d-70aa-48f7-a4e6-79411b1641f6" containerName="registry-server"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557536 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="762a9a96-5afb-4798-9e70-7f385fe215ba" containerName="registry-server"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.557547 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="564eff3b-59b2-4e16-af32-03335f96da2f" containerName="container-00"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.558219 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.747524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.747809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.850229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.850328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.850519 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.869097 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"crc-debug-cnf8c\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:16 crc kubenswrapper[4820]: I0221 09:27:16.876081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c"
Feb 21 09:27:17 crc kubenswrapper[4820]: I0221 09:27:17.262164 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" event={"ID":"de2bf205-3558-4ec5-bea0-da1be48389d3","Type":"ContainerStarted","Data":"8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831"}
Feb 21 09:27:17 crc kubenswrapper[4820]: I0221 09:27:17.262755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" event={"ID":"de2bf205-3558-4ec5-bea0-da1be48389d3","Type":"ContainerStarted","Data":"5b92906d94e5a8e08553c92ea3dd1a5ffb9353aadeece8d948bc43af9cb01e9c"}
Feb 21 09:27:17 crc kubenswrapper[4820]: I0221 09:27:17.283464 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" podStartSLOduration=1.28344655 podStartE2EDuration="1.28344655s" podCreationTimestamp="2026-02-21 09:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-21 09:27:17.273605002 +0000 UTC m=+9612.306689200" watchObservedRunningTime="2026-02-21 09:27:17.28344655 +0000 UTC m=+9612.316530748"
Feb 21 09:27:18 crc kubenswrapper[4820]: I0221 09:27:18.273134 4820 generic.go:334] "Generic (PLEG): container finished" podID="de2bf205-3558-4ec5-bea0-da1be48389d3"
containerID="8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831" exitCode=0 Feb 21 09:27:18 crc kubenswrapper[4820]: I0221 09:27:18.273170 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" event={"ID":"de2bf205-3558-4ec5-bea0-da1be48389d3","Type":"ContainerDied","Data":"8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831"} Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.394351 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.507797 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") pod \"de2bf205-3558-4ec5-bea0-da1be48389d3\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.507863 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host" (OuterVolumeSpecName: "host") pod "de2bf205-3558-4ec5-bea0-da1be48389d3" (UID: "de2bf205-3558-4ec5-bea0-da1be48389d3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.508104 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") pod \"de2bf205-3558-4ec5-bea0-da1be48389d3\" (UID: \"de2bf205-3558-4ec5-bea0-da1be48389d3\") " Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.508560 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/de2bf205-3558-4ec5-bea0-da1be48389d3-host\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.515007 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5" (OuterVolumeSpecName: "kube-api-access-d8mq5") pod "de2bf205-3558-4ec5-bea0-da1be48389d3" (UID: "de2bf205-3558-4ec5-bea0-da1be48389d3"). InnerVolumeSpecName "kube-api-access-d8mq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:27:19 crc kubenswrapper[4820]: I0221 09:27:19.611383 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8mq5\" (UniqueName: \"kubernetes.io/projected/de2bf205-3558-4ec5-bea0-da1be48389d3-kube-api-access-d8mq5\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.238058 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-cnf8c"] Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.247535 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-cnf8c"] Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.291752 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b92906d94e5a8e08553c92ea3dd1a5ffb9353aadeece8d948bc43af9cb01e9c" Feb 21 09:27:20 crc kubenswrapper[4820]: I0221 09:27:20.291802 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-cnf8c" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.466227 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-lzdkq"] Feb 21 09:27:21 crc kubenswrapper[4820]: E0221 09:27:21.467164 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" containerName="container-00" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.467187 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" containerName="container-00" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.467695 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" containerName="container-00" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.468760 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.656990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.657399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.710560 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2bf205-3558-4ec5-bea0-da1be48389d3" path="/var/lib/kubelet/pods/de2bf205-3558-4ec5-bea0-da1be48389d3/volumes" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.759996 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.760120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.760146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:21 crc kubenswrapper[4820]: I0221 09:27:21.794072 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"crc-debug-lzdkq\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:22 crc kubenswrapper[4820]: I0221 09:27:22.092068 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:22 crc kubenswrapper[4820]: W0221 09:27:22.134226 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c9fee18_cd2d_4504_baf5_9759037795cd.slice/crio-3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d WatchSource:0}: Error finding container 3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d: Status 404 returned error can't find the container with id 3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d Feb 21 09:27:22 crc kubenswrapper[4820]: I0221 09:27:22.319392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" event={"ID":"5c9fee18-cd2d-4504-baf5-9759037795cd","Type":"ContainerStarted","Data":"3c4066be45f1a5eca580ba5d04d4b37d569e8ecbbfd20114091ed136e518134d"} Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 09:27:23.329699 4820 generic.go:334] "Generic (PLEG): container finished" podID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerID="e1e3226d565bdc1b5e1bcafd76465c8d1d2592a3fc0bd3fe18e178ce2f470ed4" exitCode=0 Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 
09:27:23.329754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" event={"ID":"5c9fee18-cd2d-4504-baf5-9759037795cd","Type":"ContainerDied","Data":"e1e3226d565bdc1b5e1bcafd76465c8d1d2592a3fc0bd3fe18e178ce2f470ed4"} Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 09:27:23.370176 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-lzdkq"] Feb 21 09:27:23 crc kubenswrapper[4820]: I0221 09:27:23.381852 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/crc-debug-lzdkq"] Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.464520 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.624095 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") pod \"5c9fee18-cd2d-4504-baf5-9759037795cd\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.624288 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host" (OuterVolumeSpecName: "host") pod "5c9fee18-cd2d-4504-baf5-9759037795cd" (UID: "5c9fee18-cd2d-4504-baf5-9759037795cd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.624482 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") pod \"5c9fee18-cd2d-4504-baf5-9759037795cd\" (UID: \"5c9fee18-cd2d-4504-baf5-9759037795cd\") " Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.625154 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9fee18-cd2d-4504-baf5-9759037795cd-host\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.630176 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn" (OuterVolumeSpecName: "kube-api-access-4xxsn") pod "5c9fee18-cd2d-4504-baf5-9759037795cd" (UID: "5c9fee18-cd2d-4504-baf5-9759037795cd"). InnerVolumeSpecName "kube-api-access-4xxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:27:24 crc kubenswrapper[4820]: I0221 09:27:24.726699 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xxsn\" (UniqueName: \"kubernetes.io/projected/5c9fee18-cd2d-4504-baf5-9759037795cd-kube-api-access-4xxsn\") on node \"crc\" DevicePath \"\"" Feb 21 09:27:25 crc kubenswrapper[4820]: I0221 09:27:25.350726 4820 scope.go:117] "RemoveContainer" containerID="e1e3226d565bdc1b5e1bcafd76465c8d1d2592a3fc0bd3fe18e178ce2f470ed4" Feb 21 09:27:25 crc kubenswrapper[4820]: I0221 09:27:25.350880 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/crc-debug-lzdkq" Feb 21 09:27:25 crc kubenswrapper[4820]: I0221 09:27:25.708981 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" path="/var/lib/kubelet/pods/5c9fee18-cd2d-4504-baf5-9759037795cd/volumes" Feb 21 09:27:26 crc kubenswrapper[4820]: I0221 09:27:26.697138 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:27:26 crc kubenswrapper[4820]: E0221 09:27:26.698060 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:27:41 crc kubenswrapper[4820]: I0221 09:27:41.697173 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:27:41 crc kubenswrapper[4820]: E0221 09:27:41.698053 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:27:52 crc kubenswrapper[4820]: I0221 09:27:52.696819 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:27:52 crc kubenswrapper[4820]: E0221 09:27:52.697670 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:05 crc kubenswrapper[4820]: I0221 09:28:05.703214 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:05 crc kubenswrapper[4820]: E0221 09:28:05.704104 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:20 crc kubenswrapper[4820]: I0221 09:28:20.696628 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:20 crc kubenswrapper[4820]: E0221 09:28:20.697260 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:35 crc kubenswrapper[4820]: I0221 09:28:35.705009 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:35 crc kubenswrapper[4820]: E0221 09:28:35.705766 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:28:47 crc kubenswrapper[4820]: I0221 09:28:47.697697 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:28:47 crc kubenswrapper[4820]: E0221 09:28:47.698893 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:01 crc kubenswrapper[4820]: I0221 09:29:01.697326 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:01 crc kubenswrapper[4820]: E0221 09:29:01.699067 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:12 crc kubenswrapper[4820]: I0221 09:29:12.683197 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/init-config-reloader/0.log" Feb 21 09:29:12 crc 
kubenswrapper[4820]: I0221 09:29:12.839463 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/init-config-reloader/0.log" Feb 21 09:29:12 crc kubenswrapper[4820]: I0221 09:29:12.887207 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/alertmanager/0.log" Feb 21 09:29:12 crc kubenswrapper[4820]: I0221 09:29:12.908093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0042658c-e832-4073-894f-78a25bcdb5f9/config-reloader/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.054844 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-api/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.106127 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-listener/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.110549 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-evaluator/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.227604 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_77710997-adc1-48de-a5bd-d2e00959d510/aodh-notifier/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.268808 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cf69c945b-fsc4w_08d7d55d-2b0b-40fe-9b1c-5930358bebe8/barbican-api/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.287885 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cf69c945b-fsc4w_08d7d55d-2b0b-40fe-9b1c-5930358bebe8/barbican-api-log/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.446610 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-754674bd8d-6lxjs_d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4/barbican-keystone-listener/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.644831 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769cf6fd65-dfls2_c1f442bc-072b-483e-8821-3ee262e5aa4e/barbican-worker/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.699077 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:13 crc kubenswrapper[4820]: E0221 09:29:13.700531 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.762463 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769cf6fd65-dfls2_c1f442bc-072b-483e-8821-3ee262e5aa4e/barbican-worker-log/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.893440 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-h8h82_b328f114-e2a2-4fe6-9e6d-bf8a99364733/bootstrap-openstack-openstack-cell1/0.log" Feb 21 09:29:13 crc kubenswrapper[4820]: I0221 09:29:13.899637 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-754674bd8d-6lxjs_d22d2988-dfb7-4dd4-95d1-ef68ca0ad9e4/barbican-keystone-listener-log/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.056116 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/ceilometer-central-agent/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.098171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/ceilometer-notification-agent/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.112783 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/proxy-httpd/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.254360 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_26462812-349d-4dc0-ac4b-3d89ebeb997c/sg-core/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.320671 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a23af3b4-b486-43b2-b02c-da7b8937e091/cinder-api-log/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.334017 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a23af3b4-b486-43b2-b02c-da7b8937e091/cinder-api/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.545317 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77665b9b-37d6-4277-a75b-e30637b4b269/cinder-scheduler/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.560262 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77665b9b-37d6-4277-a75b-e30637b4b269/probe/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.764852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-hs6l2_979ca93e-175b-4fde-b503-0be2b59e1a99/configure-network-openstack-openstack-cell1/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.826285 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-cggfb_ceace068-0023-4d48-b24d-30cafb14db01/configure-os-openstack-openstack-cell1/0.log" Feb 21 09:29:14 crc kubenswrapper[4820]: I0221 09:29:14.924980 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6dfc499f-dvr9b_6c431de9-6c4a-4279-a63a-bd6742fc68f0/init/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.145920 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6dfc499f-dvr9b_6c431de9-6c4a-4279-a63a-bd6742fc68f0/init/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.162944 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-bdzjs_26d06bf4-eb66-4688-a6ba-292af8a3b9f5/download-cache-openstack-openstack-cell1/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.181462 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6dfc499f-dvr9b_6c431de9-6c4a-4279-a63a-bd6742fc68f0/dnsmasq-dns/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.370524 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c/glance-httpd/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.411983 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4b011fd9-d35f-4a3d-b9a9-6881ddce8f3c/glance-log/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.593055 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8b461284-e512-4b62-95ae-fc82b119c340/glance-httpd/0.log" Feb 21 09:29:15 crc kubenswrapper[4820]: I0221 09:29:15.600759 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8b461284-e512-4b62-95ae-fc82b119c340/glance-log/0.log" Feb 21 09:29:16 crc 
kubenswrapper[4820]: I0221 09:29:16.036405 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7fbc8dc6-rvrvw_f98ac827-2c89-4d1b-afc3-a5bd668b5d60/heat-engine/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.089571 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-c9d48c7f5-9ghjf_55b82e21-7221-4043-b9a8-5ac5853acaa1/heat-api/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.321470 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5879b888bd-q5njq_d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6/horizon/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.463479 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-pdbm6_ddf72439-0ca3-4cbc-8186-fe74744a71e4/install-certs-openstack-openstack-cell1/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.502372 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-d46b7f59f-tgv4t_c1f86beb-e638-4e60-a435-b09e2c01e733/heat-cfnapi/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.649765 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-79fjr_8f2548bf-793b-464b-9659-2962669f353e/install-os-openstack-openstack-cell1/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.752088 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29527741-49n79_7c3e367e-0369-46eb-8886-a7d40b0a6626/keystone-cron/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.812352 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5879b888bd-q5njq_d9e7e0f0-31a5-4fb5-9397-851ab7dabbf6/horizon-log/0.log" Feb 21 09:29:16 crc kubenswrapper[4820]: I0221 09:29:16.992820 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_478142ab-f7fa-4bbd-9051-6d1f5e16a9e2/kube-state-metrics/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.195698 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-vxt45_d646e04b-4083-4b58-a73f-47c72ba78dcc/libvirt-openstack-openstack-cell1/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.665494 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67f7f95649-vvsjb_546bedfc-a666-471b-9a9f-e4f4dd1e629e/neutron-httpd/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.692424 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-fcdf4b996-mcbdr_1f763cab-817e-415e-bb73-4e077fa0c745/keystone-api/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.894410 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-brfqb_0e84eaf9-2cd2-457c-b532-d632db99ba6e/neutron-dhcp-openstack-openstack-cell1/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.939468 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67f7f95649-vvsjb_546bedfc-a666-471b-9a9f-e4f4dd1e629e/neutron-api/0.log" Feb 21 09:29:17 crc kubenswrapper[4820]: I0221 09:29:17.963451 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-49ck6_915c12d6-5a69-4e4b-a001-b9e865d4377b/neutron-metadata-openstack-openstack-cell1/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.256437 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-z2jbg_f751ca69-8835-4c27-b4ab-9dac973aacd6/neutron-sriov-openstack-openstack-cell1/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.605036 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_d4de2ed9-8828-4c5e-af1e-24c752565d74/nova-cell0-conductor-conductor/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.675855 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_eae0a5ff-41ba-4522-a7f0-e69ff23ee566/nova-api-log/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.839234 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_eae0a5ff-41ba-4522-a7f0-e69ff23ee566/nova-api-api/0.log" Feb 21 09:29:18 crc kubenswrapper[4820]: I0221 09:29:18.963986 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1747f740-f880-4c19-817b-c9341c1179e7/nova-cell1-conductor-conductor/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.183529 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fb1fd00e-e5fe-4977-91db-dc6b86e63e34/nova-cell1-novncproxy-novncproxy/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.279508 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmxcx5_2666b573-2e76-4374-9fd9-39ac7aabddef/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.435913 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-w4sqf_c653de2c-8672-42fb-81c0-4e66975a3b8f/nova-cell1-openstack-openstack-cell1/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.590372 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_77c9db30-edab-4679-a671-15ae25d6448b/nova-metadata-log/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.841789 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_4d1667b0-00cb-4768-97cb-de0ee527f829/nova-scheduler-scheduler/0.log" Feb 21 09:29:19 crc kubenswrapper[4820]: I0221 09:29:19.980852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0a14fdd-7df9-4cac-aa21-b4562f320fcc/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.231668 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_77c9db30-edab-4679-a671-15ae25d6448b/nova-metadata-metadata/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.465199 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0a14fdd-7df9-4cac-aa21-b4562f320fcc/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.471812 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0a14fdd-7df9-4cac-aa21-b4562f320fcc/galera/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.558578 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_21d2b3a6-8a28-4287-8953-23782681799a/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.717846 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_21d2b3a6-8a28-4287-8953-23782681799a/mysql-bootstrap/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.754133 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_21d2b3a6-8a28-4287-8953-23782681799a/galera/0.log" Feb 21 09:29:20 crc kubenswrapper[4820]: I0221 09:29:20.849782 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c888e608-8215-44cd-a30b-43b1c34b5685/openstackclient/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.021024 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_f9b120b4-ea8d-499d-a8ca-43faa31f000e/ovn-northd/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.063112 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f9b120b4-ea8d-499d-a8ca-43faa31f000e/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.233421 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0292096a-9b13-475a-971c-cf4dae1a3f8f/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.280367 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-hxv8b_7b3e6252-4e79-4ce6-87f1-8b0e8c885536/ovn-openstack-openstack-cell1/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.385328 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0292096a-9b13-475a-971c-cf4dae1a3f8f/ovsdbserver-nb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.488495 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.574579 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_1ff1c87b-f0e7-4917-a5ce-291ff2b6bd37/ovsdbserver-nb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.686040 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c7377f38-4907-4b1d-a339-f274c122ef5c/openstack-network-exporter/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.698394 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c7377f38-4907-4b1d-a339-f274c122ef5c/ovsdbserver-nb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.925666 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_6aaa256c-7102-4960-ade0-b903b29b2716/ovsdbserver-sb/0.log" Feb 21 09:29:21 crc kubenswrapper[4820]: I0221 09:29:21.932598 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6aaa256c-7102-4960-ade0-b903b29b2716/openstack-network-exporter/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.114080 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_66a6723b-ff49-4d22-a6cd-1e9509165729/ovsdbserver-sb/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.123170 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_66a6723b-ff49-4d22-a6cd-1e9509165729/openstack-network-exporter/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.234335 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_dcf6ab13-da71-49ec-b2dc-27602f1a953f/openstack-network-exporter/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.354257 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_dcf6ab13-da71-49ec-b2dc-27602f1a953f/ovsdbserver-sb/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.539916 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64bd48f99b-s6zl2_924c1ab4-a83b-4ab0-9c80-b77489d668f7/placement-api/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.623220 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64bd48f99b-s6zl2_924c1ab4-a83b-4ab0-9c80-b77489d668f7/placement-log/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: I0221 09:29:22.659200 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c7rd9n_d5f7b8c5-1ad0-4d18-bf56-89197679507f/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Feb 21 09:29:22 crc kubenswrapper[4820]: 
I0221 09:29:22.852724 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/init-config-reloader/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.078091 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/thanos-sidecar/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.081782 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/config-reloader/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.084751 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/init-config-reloader/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.114952 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0c81808a-06e3-4353-b7a6-56ff53f15b69/prometheus/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.311008 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57d094d7-d5d2-4276-b0c2-cb98a15c0c3d/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.474144 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57d094d7-d5d2-4276-b0c2-cb98a15c0c3d/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.542374 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57d094d7-d5d2-4276-b0c2-cb98a15c0c3d/rabbitmq/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.554562 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8195e98f-70c8-4758-9d0a-e3a95de45075/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: 
I0221 09:29:23.803851 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8195e98f-70c8-4758-9d0a-e3a95de45075/setup-container/0.log" Feb 21 09:29:23 crc kubenswrapper[4820]: I0221 09:29:23.879405 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-42cjk_4449546f-cb82-4976-b53e-cad851a6369d/reboot-os-openstack-openstack-cell1/0.log" Feb 21 09:29:24 crc kubenswrapper[4820]: I0221 09:29:24.339602 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-57cnm_4ade5366-52be-4c8f-b9e2-1088b04caa90/run-os-openstack-openstack-cell1/0.log" Feb 21 09:29:24 crc kubenswrapper[4820]: I0221 09:29:24.576058 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-4pwnt_2090d99c-7240-49ef-85d8-187c0cd6c146/ssh-known-hosts-openstack/0.log" Feb 21 09:29:24 crc kubenswrapper[4820]: I0221 09:29:24.816593 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cc65c7f54-9sg96_1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d/proxy-server/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.016957 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-8nnn7_867214ab-adcb-4e78-838b-a16cda8f543c/swift-ring-rebalance/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.031195 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cc65c7f54-9sg96_1a6cc6cf-14b3-416d-a415-22fbe0dd9b9d/proxy-httpd/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.278224 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-wpbzs_dab763aa-fd5e-41b2-96d8-f758ad76f779/telemetry-openstack-openstack-cell1/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.389868 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_8195e98f-70c8-4758-9d0a-e3a95de45075/rabbitmq/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.479332 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_417782d7-a42e-4872-9e2d-0f11848812cd/tempest-tests-tempest-tests-runner/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.539146 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fe061f11-8b08-455e-856c-ac81ff40d655/test-operator-logs-container/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.797514 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-pz5bk_8acec915-5e23-4212-9bce-50fec475c433/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Feb 21 09:29:25 crc kubenswrapper[4820]: I0221 09:29:25.839525 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-wn9jn_15b9de10-7535-4310-9681-2d0171fb4376/validate-network-openstack-openstack-cell1/0.log" Feb 21 09:29:28 crc kubenswrapper[4820]: I0221 09:29:28.697219 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:28 crc kubenswrapper[4820]: E0221 09:29:28.697719 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:39 crc kubenswrapper[4820]: I0221 09:29:39.220502 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_4c039fd9-87df-497c-8e40-f9b5d2759d0f/memcached/0.log" Feb 21 09:29:42 crc kubenswrapper[4820]: I0221 09:29:42.697301 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:42 crc kubenswrapper[4820]: E0221 09:29:42.698094 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.633253 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/util/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.697212 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:29:57 crc kubenswrapper[4820]: E0221 09:29:57.697566 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.786904 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/pull/0.log" Feb 21 09:29:57 crc 
kubenswrapper[4820]: I0221 09:29:57.800180 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/util/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.818153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/pull/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.962171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/pull/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.967214 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/util/0.log" Feb 21 09:29:57 crc kubenswrapper[4820]: I0221 09:29:57.984655 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967lk2ff_47790790-d956-41e0-8868-9fb9fecfefe7/extract/0.log" Feb 21 09:29:58 crc kubenswrapper[4820]: I0221 09:29:58.402493 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-7fq9h_f8b2e5d3-e795-4971-92d9-f0d8f6586fa8/manager/0.log" Feb 21 09:29:58 crc kubenswrapper[4820]: I0221 09:29:58.826852 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-gbtvh_f8cd79d8-6ba2-467c-95b5-4d965d73ed75/manager/0.log" Feb 21 09:29:58 crc kubenswrapper[4820]: I0221 09:29:58.959666 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-tlx7z_a4f64d1a-4768-48e1-8a88-fbf906956528/manager/0.log" Feb 21 09:29:59 crc kubenswrapper[4820]: I0221 09:29:59.163007 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-t6t6b_7ab15a3b-5688-4d42-b99a-e88bb8b11f65/manager/0.log" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.160286 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t"] Feb 21 09:30:00 crc kubenswrapper[4820]: E0221 09:30:00.161209 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerName="container-00" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.161225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerName="container-00" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.161434 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9fee18-cd2d-4504-baf5-9759037795cd" containerName="container-00" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.176935 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.179820 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.180052 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.192540 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t"] Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.206885 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fj4tn_4f343be8-a654-43ac-938a-6b726caab1ad/manager/0.log" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.300718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.301036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.301178 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.402686 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.402827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.402932 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.404732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc 
kubenswrapper[4820]: I0221 09:30:00.411402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.419802 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-qvl8t_2ae82741-a73e-4d45-852f-a206550cb1e9/manager/0.log" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.426782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"collect-profiles-29527770-tpb8t\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:00 crc kubenswrapper[4820]: I0221 09:30:00.510804 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.254565 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t"] Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.255904 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-pbn9f_047df55d-9730-4215-bbd5-73fd59a0e9f5/manager/0.log" Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.257837 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-lgdx6_903ed1dc-819c-4ed9-86f6-ca32e4f96792/manager/0.log" Feb 21 09:30:01 crc kubenswrapper[4820]: I0221 09:30:01.983691 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-gxpq6_b248c78b-0213-4833-8d04-7d2514c2e673/manager/0.log" Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.207287 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-lzhqv_9ec17569-aac1-4b58-8efc-b5a483e47a71/manager/0.log" Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.268409 4820 generic.go:334] "Generic (PLEG): container finished" podID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerID="c2a97cd59dc207b1a15728870570ab992ee2d7a09fce3da6d739cd4c1be9594f" exitCode=0 Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.268667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" event={"ID":"a4db266a-21eb-4ca0-bdb5-af37ccd720d9","Type":"ContainerDied","Data":"c2a97cd59dc207b1a15728870570ab992ee2d7a09fce3da6d739cd4c1be9594f"} Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.269228 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" event={"ID":"a4db266a-21eb-4ca0-bdb5-af37ccd720d9","Type":"ContainerStarted","Data":"a0da9765a9760da0bb06dd06484a2f0660c3ef51d3403aa53bb70b85d05e147c"} Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.578803 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-c96wv_2b4b6741-5442-4ef0-a8e1-49e389157cd4/manager/0.log" Feb 21 09:30:02 crc kubenswrapper[4820]: I0221 09:30:02.778337 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-hdbhf_c4453479-1bc9-4393-8853-396ec6ae4f7f/manager/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.235778 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-lx4sd_3c9c6322-ba57-47b3-a079-ab86a6660c45/manager/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.343171 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-l85mk_b7cb4a9f-82fd-41b1-8175-351de45fde99/operator/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.610188 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.763874 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") pod \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.764039 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") pod \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.764172 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") pod \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\" (UID: \"a4db266a-21eb-4ca0-bdb5-af37ccd720d9\") " Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.765185 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4db266a-21eb-4ca0-bdb5-af37ccd720d9" (UID: "a4db266a-21eb-4ca0-bdb5-af37ccd720d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.769965 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg" (OuterVolumeSpecName: "kube-api-access-kt4qg") pod "a4db266a-21eb-4ca0-bdb5-af37ccd720d9" (UID: "a4db266a-21eb-4ca0-bdb5-af37ccd720d9"). 
InnerVolumeSpecName "kube-api-access-kt4qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.776515 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4db266a-21eb-4ca0-bdb5-af37ccd720d9" (UID: "a4db266a-21eb-4ca0-bdb5-af37ccd720d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.837707 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-lrgjm_76209e29-400d-4677-85b5-89c5f4e9323a/manager/0.log" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.869429 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4qg\" (UniqueName: \"kubernetes.io/projected/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-kube-api-access-kt4qg\") on node \"crc\" DevicePath \"\"" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.869461 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.869470 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4db266a-21eb-4ca0-bdb5-af37ccd720d9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 21 09:30:03 crc kubenswrapper[4820]: I0221 09:30:03.881569 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tt62z_2af934a2-6680-4932-b3af-5f8bdee6c740/registry-server/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.003105 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-2dfxn_9ccb51d6-d1d0-4e04-a63d-e01da9cbfab1/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.141484 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-n6dpn_18cf798f-3eea-4e15-8bb1-bda4895ffed4/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.288797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" event={"ID":"a4db266a-21eb-4ca0-bdb5-af37ccd720d9","Type":"ContainerDied","Data":"a0da9765a9760da0bb06dd06484a2f0660c3ef51d3403aa53bb70b85d05e147c"} Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.288846 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0da9765a9760da0bb06dd06484a2f0660c3ef51d3403aa53bb70b85d05e147c" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.288908 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29527770-tpb8t" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.404939 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wv5gr_fde95ed3-63bc-4401-b8b8-539da71db026/operator/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.503363 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-cv9cl_412bd84a-46bb-49b9-8d0a-17d6cc683ea0/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.684884 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.699215 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29527725-k4st8"] Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.841456 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-whrpt_b425a24f-112c-4e36-a173-21a59ce15ef0/manager/0.log" Feb 21 09:30:04 crc kubenswrapper[4820]: I0221 09:30:04.988934 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-jdxhc_246cc20b-aa24-4c15-8eb7-659e10b21e92/manager/0.log" Feb 21 09:30:05 crc kubenswrapper[4820]: I0221 09:30:05.035402 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-jt2g2_ee323e4c-82c4-4b71-b69b-5aef22e36516/manager/0.log" Feb 21 09:30:05 crc kubenswrapper[4820]: I0221 09:30:05.709351 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44294e9-1a1b-421f-bed6-f72a8bb45e1d" path="/var/lib/kubelet/pods/e44294e9-1a1b-421f-bed6-f72a8bb45e1d/volumes" Feb 21 
09:30:07 crc kubenswrapper[4820]: I0221 09:30:07.007679 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-f84fz_5424a0f0-819f-46e7-9d7d-00bbe249e4a9/manager/0.log" Feb 21 09:30:07 crc kubenswrapper[4820]: I0221 09:30:07.328626 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-54dzd_d922fcc6-f8a7-451a-b998-fc04189a6d85/manager/0.log" Feb 21 09:30:12 crc kubenswrapper[4820]: I0221 09:30:12.696795 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:12 crc kubenswrapper[4820]: E0221 09:30:12.697422 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:30:24 crc kubenswrapper[4820]: I0221 09:30:24.697330 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:24 crc kubenswrapper[4820]: E0221 09:30:24.697977 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:30:25 crc kubenswrapper[4820]: I0221 09:30:25.717531 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zl5zd_3b64a6e2-e14a-4de0-8630-e617a55b0794/control-plane-machine-set-operator/0.log" Feb 21 09:30:25 crc kubenswrapper[4820]: I0221 09:30:25.919819 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zz4sx_8add43c0-9280-4e92-b4fe-4628eb645e56/kube-rbac-proxy/0.log" Feb 21 09:30:25 crc kubenswrapper[4820]: I0221 09:30:25.968869 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zz4sx_8add43c0-9280-4e92-b4fe-4628eb645e56/machine-api-operator/0.log" Feb 21 09:30:38 crc kubenswrapper[4820]: I0221 09:30:38.162500 4820 scope.go:117] "RemoveContainer" containerID="d355316426a1db688b7e0f637002731b78bea683453439286ba724dcfa414dc2" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.002001 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-mf4f6_d35515e4-d029-4f6a-be2a-d7ea32ab06ad/cert-manager-controller/0.log" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.232651 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-9lkfz_3a53f347-c86d-4ef3-82c2-29549135afe6/cert-manager-cainjector/0.log" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.233448 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-r5ddv_e88f2404-d287-429a-a995-ea8be7fa5be8/cert-manager-webhook/0.log" Feb 21 09:30:39 crc kubenswrapper[4820]: I0221 09:30:39.697682 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:39 crc kubenswrapper[4820]: E0221 09:30:39.698206 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.698594 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-b5kf2_15902f84-d2f7-42a0-929e-89c21cffddd8/nmstate-console-plugin/0.log" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.882738 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tz942_a6c76731-bd23-43eb-84f6-84d675965035/nmstate-handler/0.log" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.983816 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-m6svj_b7930d8a-8ded-4552-9c0a-aa73fa2006e2/kube-rbac-proxy/0.log" Feb 21 09:30:51 crc kubenswrapper[4820]: I0221 09:30:51.993300 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-m6svj_b7930d8a-8ded-4552-9c0a-aa73fa2006e2/nmstate-metrics/0.log" Feb 21 09:30:52 crc kubenswrapper[4820]: I0221 09:30:52.148424 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-4rnft_375887b5-9d2e-4af8-9128-789ebd290f97/nmstate-operator/0.log" Feb 21 09:30:52 crc kubenswrapper[4820]: I0221 09:30:52.198452 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-c8gmp_62b9a00a-9b7e-4057-bc85-2a16c48957f4/nmstate-webhook/0.log" Feb 21 09:30:53 crc kubenswrapper[4820]: I0221 09:30:53.696734 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:30:53 crc kubenswrapper[4820]: E0221 09:30:53.697306 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:04 crc kubenswrapper[4820]: I0221 09:31:04.936793 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lw5b9_b371e087-d814-4a0f-9ff3-d55d20e24544/prometheus-operator/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.097496 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-p7twl_33a57c79-5f59-4436-802e-2be346a7f24b/prometheus-operator-admission-webhook/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.116817 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv_6c94be0a-30e4-454d-a744-be2161cdbed2/prometheus-operator-admission-webhook/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.295452 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t74mh_dab6a090-8dce-4a3c-aa4a-467c37f77510/operator/0.log" Feb 21 09:31:05 crc kubenswrapper[4820]: I0221 09:31:05.314415 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-m42j5_33c8ba11-479e-4bbc-87c4-0d6da77be2eb/perses-operator/0.log" Feb 21 09:31:07 crc kubenswrapper[4820]: I0221 09:31:07.697109 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:07 crc kubenswrapper[4820]: E0221 09:31:07.697964 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.172090 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jrcl5_6f342ec6-aed8-48ff-a1ba-9d6634bda927/kube-rbac-proxy/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.447305 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.584771 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jrcl5_6f342ec6-aed8-48ff-a1ba-9d6634bda927/controller/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.701212 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:19 crc kubenswrapper[4820]: E0221 09:31:19.701489 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.734371 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.762680 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.765843 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.787093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.975173 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:19 crc kubenswrapper[4820]: I0221 09:31:19.976663 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.003191 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.010412 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.167056 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-reloader/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.197676 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/controller/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.201950 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-frr-files/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.223755 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/cp-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.402021 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/frr-metrics/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.454555 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/kube-rbac-proxy/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.467383 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/kube-rbac-proxy-frr/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.647163 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/reloader/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.670419 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-jw8nq_a5c8b64a-a6da-435e-a87d-bd397ad045a4/frr-k8s-webhook-server/0.log" Feb 21 09:31:20 crc kubenswrapper[4820]: I0221 09:31:20.903164 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9bd6bbfc6-srwvl_bc44a7fe-6bdf-4d85-a2aa-aeafa3d1d74d/manager/0.log" Feb 21 09:31:21 crc kubenswrapper[4820]: I0221 09:31:21.085678 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c68698666-cvwrd_e17110d4-51ce-4fca-a5e7-ba4eedeb42a8/webhook-server/0.log" Feb 21 09:31:21 crc kubenswrapper[4820]: I0221 09:31:21.134711 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwv62_cc577a47-69e2-4ae2-93c1-e922f0c6e3d8/kube-rbac-proxy/0.log" Feb 21 09:31:22 crc kubenswrapper[4820]: I0221 09:31:22.018098 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwv62_cc577a47-69e2-4ae2-93c1-e922f0c6e3d8/speaker/0.log" Feb 21 09:31:23 crc kubenswrapper[4820]: I0221 09:31:23.537755 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dr9qm_2827f692-18f9-4d32-b7bd-636d595a008f/frr/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.549956 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/util/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.799228 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/pull/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.812020 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/util/0.log" Feb 21 09:31:33 crc kubenswrapper[4820]: I0221 09:31:33.858686 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.027446 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.063346 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.244753 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbdd7_a332a364-5157-4e4a-8313-7b267a41ac97/extract/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.393285 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.423373 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.457847 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.457918 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.615535 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.655909 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/util/0.log" Feb 21 
09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.656034 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p9lrm_f69dedde-7358-4e63-b7b3-cc4ff8c1258e/extract/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.697232 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:31:34 crc kubenswrapper[4820]: E0221 09:31:34.697566 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.798653 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.937428 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/util/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.969654 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/pull/0.log" Feb 21 09:31:34 crc kubenswrapper[4820]: I0221 09:31:34.979150 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/pull/0.log" Feb 21 09:31:35 crc 
kubenswrapper[4820]: I0221 09:31:35.169451 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/pull/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.194816 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/util/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.197786 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213q5ltp_2e4047bc-d968-4163-82f1-13cecd18893e/extract/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.335319 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-utilities/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.496134 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-content/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.500727 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-utilities/0.log" Feb 21 09:31:35 crc kubenswrapper[4820]: I0221 09:31:35.545550 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.150819 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-utilities/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 
09:31:36.218619 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.338548 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-utilities/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.663598 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.740625 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-utilities/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.754675 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-content/0.log" Feb 21 09:31:36 crc kubenswrapper[4820]: I0221 09:31:36.897912 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-utilities/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.040134 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/extract-content/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.386426 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/util/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.428859 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9t7gg_c232aa63-d98b-4e40-9efb-00e3eff02b50/registry-server/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.499961 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/util/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.593345 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/pull/0.log" Feb 21 09:31:37 crc kubenswrapper[4820]: I0221 09:31:37.626920 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/pull/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.172153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h8lnd_d5791a2a-f861-4564-b560-cef4e1d2b529/registry-server/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.196007 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/util/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.225502 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/extract/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.232263 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadc5gx_9e889767-aefe-4149-8677-fd116ae8d598/pull/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 
09:31:38.351288 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wq5r9_37683f41-a9aa-4abd-809d-25df5114e93a/marketplace-operator/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.400597 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-utilities/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.556830 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-content/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.573709 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-utilities/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.588076 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-content/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.757148 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-content/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.779855 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/extract-utilities/0.log" Feb 21 09:31:38 crc kubenswrapper[4820]: I0221 09:31:38.861076 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-utilities/0.log" Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.015911 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-utilities/0.log"
Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.064406 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-content/0.log"
Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.091182 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78dnb_ef1d43db-e76a-4d34-8528-4c549bcbc2e2/registry-server/0.log"
Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.124038 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-content/0.log"
Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.267768 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-utilities/0.log"
Feb 21 09:31:39 crc kubenswrapper[4820]: I0221 09:31:39.305850 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/extract-content/0.log"
Feb 21 09:31:40 crc kubenswrapper[4820]: I0221 09:31:40.539159 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-drqmx_fa04064f-b88b-4b27-a882-1cbdae3d4485/registry-server/0.log"
Feb 21 09:31:45 crc kubenswrapper[4820]: I0221 09:31:45.704553 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"
Feb 21 09:31:45 crc kubenswrapper[4820]: E0221 09:31:45.705684 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:31:51 crc kubenswrapper[4820]: I0221 09:31:51.981100 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lw5b9_b371e087-d814-4a0f-9ff3-d55d20e24544/prometheus-operator/0.log"
Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.010061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-qdtrv_6c94be0a-30e4-454d-a744-be2161cdbed2/prometheus-operator-admission-webhook/0.log"
Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.024140 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-89dc89b99-p7twl_33a57c79-5f59-4436-802e-2be346a7f24b/prometheus-operator-admission-webhook/0.log"
Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.155032 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t74mh_dab6a090-8dce-4a3c-aa4a-467c37f77510/operator/0.log"
Feb 21 09:31:52 crc kubenswrapper[4820]: I0221 09:31:52.224723 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-m42j5_33c8ba11-479e-4bbc-87c4-0d6da77be2eb/perses-operator/0.log"
Feb 21 09:31:57 crc kubenswrapper[4820]: E0221 09:31:57.552100 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:35156->38.102.83.201:43255: write tcp 38.102.83.201:35156->38.102.83.201:43255: write: broken pipe
Feb 21 09:31:57 crc kubenswrapper[4820]: I0221 09:31:57.696981 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"
Feb 21 09:31:57 crc kubenswrapper[4820]: E0221 09:31:57.697579 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.677823 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"]
Feb 21 09:32:10 crc kubenswrapper[4820]: E0221 09:32:10.678787 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerName="collect-profiles"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.678805 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerName="collect-profiles"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.679089 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4db266a-21eb-4ca0-bdb5-af37ccd720d9" containerName="collect-profiles"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.680951 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.691822 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"]
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.696536 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"
Feb 21 09:32:10 crc kubenswrapper[4820]: E0221 09:32:10.696805 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qth8z_openshift-machine-config-operator(ce38546e-524f-4801-8ee1-b4bb9d6c6dff)\"" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.776558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.777362 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.777553 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.879570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.879633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.879825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.880348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.880608 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:10 crc kubenswrapper[4820]: I0221 09:32:10.901827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"certified-operators-cdpzw\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") " pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:11 crc kubenswrapper[4820]: I0221 09:32:11.001619 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:11 crc kubenswrapper[4820]: I0221 09:32:11.516133 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"]
Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.424446 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4" exitCode=0
Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.424514 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"}
Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.424690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerStarted","Data":"bd4acb78834dcfa9f90a8b8d7183d3227058b4942670ca06e6201e726bc94f40"}
Feb 21 09:32:12 crc kubenswrapper[4820]: I0221 09:32:12.427878 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.263985 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"]
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.267802 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.282552 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"]
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.355224 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.355780 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.355909 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.436407 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerStarted","Data":"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"}
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.457545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.457615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.457713 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.458184 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.458298 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.478024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"redhat-marketplace-llzgs\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") " pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:13 crc kubenswrapper[4820]: I0221 09:32:13.584160 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.063162 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"]
Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.446623 4820 generic.go:334] "Generic (PLEG): container finished" podID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" exitCode=0
Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.446667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266"}
Feb 21 09:32:14 crc kubenswrapper[4820]: I0221 09:32:14.447051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerStarted","Data":"fce0bbbe9c0db17ad430e4a8439b4f17e67bbff84360a08e20f939148a9ce9fd"}
Feb 21 09:32:15 crc kubenswrapper[4820]: I0221 09:32:15.471977 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerStarted","Data":"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"}
Feb 21 09:32:15 crc kubenswrapper[4820]: I0221 09:32:15.477794 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58" exitCode=0
Feb 21 09:32:15 crc kubenswrapper[4820]: I0221 09:32:15.477858 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"}
Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.491511 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerStarted","Data":"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"}
Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.494816 4820 generic.go:334] "Generic (PLEG): container finished" podID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20" exitCode=0
Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.494847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"}
Feb 21 09:32:16 crc kubenswrapper[4820]: I0221 09:32:16.519832 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdpzw" podStartSLOduration=3.057871767 podStartE2EDuration="6.519813289s" podCreationTimestamp="2026-02-21 09:32:10 +0000 UTC" firstStartedPulling="2026-02-21 09:32:12.427549289 +0000 UTC m=+9907.460633487" lastFinishedPulling="2026-02-21 09:32:15.889490811 +0000 UTC m=+9910.922575009" observedRunningTime="2026-02-21 09:32:16.51765385 +0000 UTC m=+9911.550738048" watchObservedRunningTime="2026-02-21 09:32:16.519813289 +0000 UTC m=+9911.552897477"
Feb 21 09:32:17 crc kubenswrapper[4820]: I0221 09:32:17.506842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerStarted","Data":"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"}
Feb 21 09:32:17 crc kubenswrapper[4820]: I0221 09:32:17.527043 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-llzgs" podStartSLOduration=2.108247057 podStartE2EDuration="4.527028s" podCreationTimestamp="2026-02-21 09:32:13 +0000 UTC" firstStartedPulling="2026-02-21 09:32:14.448296983 +0000 UTC m=+9909.481381181" lastFinishedPulling="2026-02-21 09:32:16.867077926 +0000 UTC m=+9911.900162124" observedRunningTime="2026-02-21 09:32:17.524133211 +0000 UTC m=+9912.557217449" watchObservedRunningTime="2026-02-21 09:32:17.527028 +0000 UTC m=+9912.560112198"
Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.004203 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.004932 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.059033 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:21 crc kubenswrapper[4820]: I0221 09:32:21.594156 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:22 crc kubenswrapper[4820]: I0221 09:32:22.655039 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"]
Feb 21 09:32:22 crc kubenswrapper[4820]: I0221 09:32:22.697063 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c"
Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.565606 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740"}
Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.565742 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdpzw" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" containerID="cri-o://0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" gracePeriod=2
Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.585098 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.586196 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:23 crc kubenswrapper[4820]: I0221 09:32:23.663084 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.101691 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.207819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") pod \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") "
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.207921 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") pod \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") "
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.208081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") pod \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\" (UID: \"8d56d629-33f3-48af-b7f5-acc9cd1c206c\") "
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.209380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities" (OuterVolumeSpecName: "utilities") pod "8d56d629-33f3-48af-b7f5-acc9cd1c206c" (UID: "8d56d629-33f3-48af-b7f5-acc9cd1c206c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.226313 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw" (OuterVolumeSpecName: "kube-api-access-kxsmw") pod "8d56d629-33f3-48af-b7f5-acc9cd1c206c" (UID: "8d56d629-33f3-48af-b7f5-acc9cd1c206c"). InnerVolumeSpecName "kube-api-access-kxsmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.311454 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.311497 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxsmw\" (UniqueName: \"kubernetes.io/projected/8d56d629-33f3-48af-b7f5-acc9cd1c206c-kube-api-access-kxsmw\") on node \"crc\" DevicePath \"\""
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.334829 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d56d629-33f3-48af-b7f5-acc9cd1c206c" (UID: "8d56d629-33f3-48af-b7f5-acc9cd1c206c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.413866 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d56d629-33f3-48af-b7f5-acc9cd1c206c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.576631 4820 generic.go:334] "Generic (PLEG): container finished" podID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310" exitCode=0
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.577939 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdpzw"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.583185 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"}
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.583270 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdpzw" event={"ID":"8d56d629-33f3-48af-b7f5-acc9cd1c206c","Type":"ContainerDied","Data":"bd4acb78834dcfa9f90a8b8d7183d3227058b4942670ca06e6201e726bc94f40"}
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.583299 4820 scope.go:117] "RemoveContainer" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.607075 4820 scope.go:117] "RemoveContainer" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.623411 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"]
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.637292 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdpzw"]
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.660415 4820 scope.go:117] "RemoveContainer" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.682537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.712730 4820 scope.go:117] "RemoveContainer" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"
Feb 21 09:32:24 crc kubenswrapper[4820]: E0221 09:32:24.713831 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310\": container with ID starting with 0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310 not found: ID does not exist" containerID="0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.713859 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310"} err="failed to get container status \"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310\": rpc error: code = NotFound desc = could not find container \"0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310\": container with ID starting with 0d921aa9818cebc4070e0fee416a2787c5124bf5718ba2bcca5a5d8cf8816310 not found: ID does not exist"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.713877 4820 scope.go:117] "RemoveContainer" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"
Feb 21 09:32:24 crc kubenswrapper[4820]: E0221 09:32:24.714181 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58\": container with ID starting with bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58 not found: ID does not exist" containerID="bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.714198 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58"} err="failed to get container status \"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58\": rpc error: code = NotFound desc = could not find container \"bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58\": container with ID starting with bdcf851c320c7f5cf07a0b510e59a5e3943f16eeebe0dfe952096557b8f15f58 not found: ID does not exist"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.714210 4820 scope.go:117] "RemoveContainer" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"
Feb 21 09:32:24 crc kubenswrapper[4820]: E0221 09:32:24.714665 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4\": container with ID starting with b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4 not found: ID does not exist" containerID="b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"
Feb 21 09:32:24 crc kubenswrapper[4820]: I0221 09:32:24.714687 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4"} err="failed to get container status \"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4\": rpc error: code = NotFound desc = could not find container \"b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4\": container with ID starting with b5fda6a930694181afc24fd6be387f5a101d214314f416b3c6cd624e2079a1f4 not found: ID does not exist"
Feb 21 09:32:25 crc kubenswrapper[4820]: I0221 09:32:25.709065 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" path="/var/lib/kubelet/pods/8d56d629-33f3-48af-b7f5-acc9cd1c206c/volumes"
Feb 21 09:32:25 crc kubenswrapper[4820]: I0221 09:32:25.858412 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"]
Feb 21 09:32:27 crc kubenswrapper[4820]: I0221 09:32:27.610752 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-llzgs" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" containerID="cri-o://76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" gracePeriod=2
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.161366 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.307003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") pod \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") "
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.307059 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") pod \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") "
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.307260 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") pod \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\" (UID: \"e14b6a60-a84b-48e4-8a49-82c31e29a67a\") "
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.308634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities" (OuterVolumeSpecName: "utilities") pod "e14b6a60-a84b-48e4-8a49-82c31e29a67a" (UID: "e14b6a60-a84b-48e4-8a49-82c31e29a67a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.333467 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e14b6a60-a84b-48e4-8a49-82c31e29a67a" (UID: "e14b6a60-a84b-48e4-8a49-82c31e29a67a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.409534 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.409566 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e14b6a60-a84b-48e4-8a49-82c31e29a67a-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.654944 4820 generic.go:334] "Generic (PLEG): container finished" podID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca" exitCode=0
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"}
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-llzgs" event={"ID":"e14b6a60-a84b-48e4-8a49-82c31e29a67a","Type":"ContainerDied","Data":"fce0bbbe9c0db17ad430e4a8439b4f17e67bbff84360a08e20f939148a9ce9fd"}
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655131 4820 scope.go:117] "RemoveContainer" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.655266 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-llzgs"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.678210 4820 scope.go:117] "RemoveContainer" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.796458 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr" (OuterVolumeSpecName: "kube-api-access-hh2lr") pod "e14b6a60-a84b-48e4-8a49-82c31e29a67a" (UID: "e14b6a60-a84b-48e4-8a49-82c31e29a67a"). InnerVolumeSpecName "kube-api-access-hh2lr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.817396 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh2lr\" (UniqueName: \"kubernetes.io/projected/e14b6a60-a84b-48e4-8a49-82c31e29a67a-kube-api-access-hh2lr\") on node \"crc\" DevicePath \"\""
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.838373 4820 scope.go:117] "RemoveContainer" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.931606 4820 scope.go:117] "RemoveContainer" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"
Feb 21 09:32:28 crc kubenswrapper[4820]: E0221 09:32:28.932054 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca\": container with ID starting with 76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca not found: ID does not exist" containerID="76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932087 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca"} err="failed to get container status \"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca\": rpc error: code = NotFound desc = could not find container \"76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca\": container with ID starting with 76a0b67c5582d4c48afa59e38b1707a608d2f54106650f69b3e841659281a8ca not found: ID does not exist"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932107 4820 scope.go:117] "RemoveContainer" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"
Feb 21 09:32:28 crc kubenswrapper[4820]: E0221 09:32:28.932384 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20\": container with ID starting with 7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20 not found: ID does not exist" containerID="7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932430 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20"} err="failed to get container status \"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20\": rpc error: code = NotFound desc = could not find container \"7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20\": container with ID starting with 7446e4840e25d0fbfaf0b54a73a5f5b2ed6cd3aa2768e728591ca67730922e20 not found: ID does not exist"
Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932455 4820 scope.go:117]
"RemoveContainer" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" Feb 21 09:32:28 crc kubenswrapper[4820]: E0221 09:32:28.932755 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266\": container with ID starting with 40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266 not found: ID does not exist" containerID="40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.932775 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266"} err="failed to get container status \"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266\": rpc error: code = NotFound desc = could not find container \"40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266\": container with ID starting with 40b1268e635a6cb294144d2a49a69288bb901fd98555660061f41a68b47e7266 not found: ID does not exist" Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.985147 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:28 crc kubenswrapper[4820]: I0221 09:32:28.994744 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-llzgs"] Feb 21 09:32:29 crc kubenswrapper[4820]: I0221 09:32:29.710967 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" path="/var/lib/kubelet/pods/e14b6a60-a84b-48e4-8a49-82c31e29a67a/volumes" Feb 21 09:33:38 crc kubenswrapper[4820]: I0221 09:33:38.295880 4820 scope.go:117] "RemoveContainer" containerID="8a5a16f3ed4eb8c982e9fa5dc44916a814bbff0058422a3d82bec9581623f831" Feb 21 09:34:00 crc kubenswrapper[4820]: I0221 09:34:00.713824 4820 
generic.go:334] "Generic (PLEG): container finished" podID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerID="4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f" exitCode=0 Feb 21 09:34:00 crc kubenswrapper[4820]: I0221 09:34:00.713909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" event={"ID":"a1900ff3-6f36-49ff-88d2-898da25c3385","Type":"ContainerDied","Data":"4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f"} Feb 21 09:34:00 crc kubenswrapper[4820]: I0221 09:34:00.714789 4820 scope.go:117] "RemoveContainer" containerID="4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f" Feb 21 09:34:01 crc kubenswrapper[4820]: I0221 09:34:01.539093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/gather/0.log" Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.318505 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.319313 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" containerID="cri-o://5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351" gracePeriod=2 Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.330206 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j7d4d/must-gather-qvcwh"] Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.819478 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/copy/0.log" Feb 21 09:34:10 crc kubenswrapper[4820]: I0221 09:34:10.820122 4820 generic.go:334] "Generic (PLEG): container finished" podID="a1900ff3-6f36-49ff-88d2-898da25c3385" 
containerID="5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351" exitCode=143 Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.318745 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/copy/0.log" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.319303 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.427298 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") pod \"a1900ff3-6f36-49ff-88d2-898da25c3385\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.427492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") pod \"a1900ff3-6f36-49ff-88d2-898da25c3385\" (UID: \"a1900ff3-6f36-49ff-88d2-898da25c3385\") " Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.432912 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg" (OuterVolumeSpecName: "kube-api-access-gq9cg") pod "a1900ff3-6f36-49ff-88d2-898da25c3385" (UID: "a1900ff3-6f36-49ff-88d2-898da25c3385"). InnerVolumeSpecName "kube-api-access-gq9cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.529449 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq9cg\" (UniqueName: \"kubernetes.io/projected/a1900ff3-6f36-49ff-88d2-898da25c3385-kube-api-access-gq9cg\") on node \"crc\" DevicePath \"\"" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.624930 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a1900ff3-6f36-49ff-88d2-898da25c3385" (UID: "a1900ff3-6f36-49ff-88d2-898da25c3385"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.631418 4820 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1900ff3-6f36-49ff-88d2-898da25c3385-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.712267 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" path="/var/lib/kubelet/pods/a1900ff3-6f36-49ff-88d2-898da25c3385/volumes" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.833549 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j7d4d_must-gather-qvcwh_a1900ff3-6f36-49ff-88d2-898da25c3385/copy/0.log" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.834046 4820 scope.go:117] "RemoveContainer" containerID="5b05abff72f7a6214e6796b0e752b52807cb2023e6916ddd56daecc4e8528351" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.834156 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j7d4d/must-gather-qvcwh" Feb 21 09:34:11 crc kubenswrapper[4820]: I0221 09:34:11.859835 4820 scope.go:117] "RemoveContainer" containerID="4bcf628d58a10209d2b868562aaef36528866f8053f65575b0ffa6aa1295907f" Feb 21 09:34:43 crc kubenswrapper[4820]: I0221 09:34:43.815848 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:34:43 crc kubenswrapper[4820]: I0221 09:34:43.816402 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:35:13 crc kubenswrapper[4820]: I0221 09:35:13.816632 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:35:13 crc kubenswrapper[4820]: I0221 09:35:13.817369 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.815779 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.816560 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.816625 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.817688 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740"} pod="openshift-machine-config-operator/machine-config-daemon-qth8z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 21 09:35:43 crc kubenswrapper[4820]: I0221 09:35:43.817759 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" containerID="cri-o://bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740" gracePeriod=600 Feb 21 09:35:44 crc kubenswrapper[4820]: E0221 09:35:44.014927 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce38546e_524f_4801_8ee1_b4bb9d6c6dff.slice/crio-bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce38546e_524f_4801_8ee1_b4bb9d6c6dff.slice/crio-conmon-bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740.scope\": RecentStats: unable to find data in memory cache]" Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.847505 4820 generic.go:334] "Generic (PLEG): container finished" podID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerID="bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740" exitCode=0 Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.847611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerDied","Data":"bf473f3ebb1aa62725d356509d5177afb2616e52fd0366893704e3b355182740"} Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.848513 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" event={"ID":"ce38546e-524f-4801-8ee1-b4bb9d6c6dff","Type":"ContainerStarted","Data":"3feb72bbf479b271fbf4fa3fe95eeb9d443bbe28e301842a89237a175e700cac"} Feb 21 09:35:44 crc kubenswrapper[4820]: I0221 09:35:44.848599 4820 scope.go:117] "RemoveContainer" containerID="ca31b8c32c262aa794e6702dd4b8da8bd7a5fd2963091fd054eded608c22fc5c" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.131486 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133393 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133406 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133413 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133432 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133440 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133466 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133474 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133485 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="gather" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133493 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="gather" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133509 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133517 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="extract-utilities" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133533 4820 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133541 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="extract-content" Feb 21 09:37:05 crc kubenswrapper[4820]: E0221 09:37:05.133554 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133561 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133811 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="copy" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133825 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1900ff3-6f36-49ff-88d2-898da25c3385" containerName="gather" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133849 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d56d629-33f3-48af-b7f5-acc9cd1c206c" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.133860 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b6a60-a84b-48e4-8a49-82c31e29a67a" containerName="registry-server" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.135650 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.167162 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.296885 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.296951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.297170 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.399619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.399787 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.399819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.400499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.400499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.425071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"redhat-operators-rgdbd\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") " pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.463764 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:05 crc kubenswrapper[4820]: I0221 09:37:05.910192 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"] Feb 21 09:37:06 crc kubenswrapper[4820]: I0221 09:37:06.741855 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d" exitCode=0 Feb 21 09:37:06 crc kubenswrapper[4820]: I0221 09:37:06.742321 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"} Feb 21 09:37:06 crc kubenswrapper[4820]: I0221 09:37:06.742384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerStarted","Data":"aad725d2c0c1a23529c6dda5056ee5ab45cdedb96213744c47c043cffbee6a9f"} Feb 21 09:37:07 crc kubenswrapper[4820]: I0221 09:37:07.752223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerStarted","Data":"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"} Feb 21 09:37:10 crc kubenswrapper[4820]: I0221 09:37:10.785097 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130" exitCode=0 Feb 21 09:37:10 crc kubenswrapper[4820]: I0221 09:37:10.785598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" 
event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"} Feb 21 09:37:11 crc kubenswrapper[4820]: I0221 09:37:11.814215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerStarted","Data":"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"} Feb 21 09:37:11 crc kubenswrapper[4820]: I0221 09:37:11.844529 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rgdbd" podStartSLOduration=2.338540374 podStartE2EDuration="6.84451379s" podCreationTimestamp="2026-02-21 09:37:05 +0000 UTC" firstStartedPulling="2026-02-21 09:37:06.747794187 +0000 UTC m=+10201.780878385" lastFinishedPulling="2026-02-21 09:37:11.253767593 +0000 UTC m=+10206.286851801" observedRunningTime="2026-02-21 09:37:11.836075 +0000 UTC m=+10206.869159208" watchObservedRunningTime="2026-02-21 09:37:11.84451379 +0000 UTC m=+10206.877597978" Feb 21 09:37:15 crc kubenswrapper[4820]: I0221 09:37:15.464213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:15 crc kubenswrapper[4820]: I0221 09:37:15.465156 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rgdbd" Feb 21 09:37:16 crc kubenswrapper[4820]: I0221 09:37:16.520480 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rgdbd" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" probeResult="failure" output=< Feb 21 09:37:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 21 09:37:16 crc kubenswrapper[4820]: > Feb 21 09:37:25 crc kubenswrapper[4820]: I0221 09:37:25.525857 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rgdbd"
Feb 21 09:37:25 crc kubenswrapper[4820]: I0221 09:37:25.610791 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rgdbd"
Feb 21 09:37:25 crc kubenswrapper[4820]: I0221 09:37:25.776186 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"]
Feb 21 09:37:26 crc kubenswrapper[4820]: I0221 09:37:26.984430 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rgdbd" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server" containerID="cri-o://a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" gracePeriod=2
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.494844 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd"
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.630188 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") pod \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") "
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.630290 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") pod \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") "
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.630440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") pod \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\" (UID: \"fc01af99-1ad2-4dea-a60d-2b37377ccd46\") "
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.631259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities" (OuterVolumeSpecName: "utilities") pod "fc01af99-1ad2-4dea-a60d-2b37377ccd46" (UID: "fc01af99-1ad2-4dea-a60d-2b37377ccd46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.635424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92" (OuterVolumeSpecName: "kube-api-access-qjp92") pod "fc01af99-1ad2-4dea-a60d-2b37377ccd46" (UID: "fc01af99-1ad2-4dea-a60d-2b37377ccd46"). InnerVolumeSpecName "kube-api-access-qjp92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.733369 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjp92\" (UniqueName: \"kubernetes.io/projected/fc01af99-1ad2-4dea-a60d-2b37377ccd46-kube-api-access-qjp92\") on node \"crc\" DevicePath \"\""
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.733405 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.775409 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc01af99-1ad2-4dea-a60d-2b37377ccd46" (UID: "fc01af99-1ad2-4dea-a60d-2b37377ccd46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.835017 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc01af99-1ad2-4dea-a60d-2b37377ccd46-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996149 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447" exitCode=0
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"}
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rgdbd" event={"ID":"fc01af99-1ad2-4dea-a60d-2b37377ccd46","Type":"ContainerDied","Data":"aad725d2c0c1a23529c6dda5056ee5ab45cdedb96213744c47c043cffbee6a9f"}
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996286 4820 scope.go:117] "RemoveContainer" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"
Feb 21 09:37:27 crc kubenswrapper[4820]: I0221 09:37:27.996215 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rgdbd"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.021334 4820 scope.go:117] "RemoveContainer" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.039495 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"]
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.047602 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rgdbd"]
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.074444 4820 scope.go:117] "RemoveContainer" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.096639 4820 scope.go:117] "RemoveContainer" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"
Feb 21 09:37:28 crc kubenswrapper[4820]: E0221 09:37:28.097107 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447\": container with ID starting with a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447 not found: ID does not exist" containerID="a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097149 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447"} err="failed to get container status \"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447\": rpc error: code = NotFound desc = could not find container \"a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447\": container with ID starting with a18b5261903b4ad5d3b8c92989dca15656c99ab9644ce4bb0dbcb664662f1447 not found: ID does not exist"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097181 4820 scope.go:117] "RemoveContainer" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"
Feb 21 09:37:28 crc kubenswrapper[4820]: E0221 09:37:28.097927 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130\": container with ID starting with 01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130 not found: ID does not exist" containerID="01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097949 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130"} err="failed to get container status \"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130\": rpc error: code = NotFound desc = could not find container \"01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130\": container with ID starting with 01eb0cd1614fc3e99da33f668e8d954021f07a0a6909cfc3bd79cfd226b35130 not found: ID does not exist"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.097966 4820 scope.go:117] "RemoveContainer" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"
Feb 21 09:37:28 crc kubenswrapper[4820]: E0221 09:37:28.098236 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d\": container with ID starting with 8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d not found: ID does not exist" containerID="8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"
Feb 21 09:37:28 crc kubenswrapper[4820]: I0221 09:37:28.098286 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d"} err="failed to get container status \"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d\": rpc error: code = NotFound desc = could not find container \"8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d\": container with ID starting with 8b3b867293cb85a1c7a5aba507d88e48c7e4d72bd9d03c823ebfe2b7b4b5547d not found: ID does not exist"
Feb 21 09:37:29 crc kubenswrapper[4820]: I0221 09:37:29.711489 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" path="/var/lib/kubelet/pods/fc01af99-1ad2-4dea-a60d-2b37377ccd46/volumes"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.527168 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2vvg"]
Feb 21 09:37:39 crc kubenswrapper[4820]: E0221 09:37:39.528210 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-content"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528228 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-content"
Feb 21 09:37:39 crc kubenswrapper[4820]: E0221 09:37:39.528277 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528283 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server"
Feb 21 09:37:39 crc kubenswrapper[4820]: E0221 09:37:39.528297 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-utilities"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="extract-utilities"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.528561 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc01af99-1ad2-4dea-a60d-2b37377ccd46" containerName="registry-server"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.530512 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.552632 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"]
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.633872 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.633974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.634022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.736539 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.737894 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.738013 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.738654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.738939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.765583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"community-operators-q2vvg\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") " pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:39 crc kubenswrapper[4820]: I0221 09:37:39.873074 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:40 crc kubenswrapper[4820]: I0221 09:37:40.399048 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"]
Feb 21 09:37:40 crc kubenswrapper[4820]: W0221 09:37:40.405593 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb861e2e2_bab2_43e3_ac35_db7964e69058.slice/crio-fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1 WatchSource:0}: Error finding container fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1: Status 404 returned error can't find the container with id fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1
Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.118990 4820 generic.go:334] "Generic (PLEG): container finished" podID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7" exitCode=0
Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.119062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"}
Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.119322 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerStarted","Data":"fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1"}
Feb 21 09:37:41 crc kubenswrapper[4820]: I0221 09:37:41.122160 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 21 09:37:42 crc kubenswrapper[4820]: I0221 09:37:42.130431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerStarted","Data":"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"}
Feb 21 09:37:43 crc kubenswrapper[4820]: I0221 09:37:43.146846 4820 generic.go:334] "Generic (PLEG): container finished" podID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86" exitCode=0
Feb 21 09:37:43 crc kubenswrapper[4820]: I0221 09:37:43.146946 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"}
Feb 21 09:37:44 crc kubenswrapper[4820]: I0221 09:37:44.157725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerStarted","Data":"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"}
Feb 21 09:37:44 crc kubenswrapper[4820]: I0221 09:37:44.187338 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2vvg" podStartSLOduration=2.7691615499999997 podStartE2EDuration="5.187319317s" podCreationTimestamp="2026-02-21 09:37:39 +0000 UTC" firstStartedPulling="2026-02-21 09:37:41.121822957 +0000 UTC m=+10236.154907155" lastFinishedPulling="2026-02-21 09:37:43.539980724 +0000 UTC m=+10238.573064922" observedRunningTime="2026-02-21 09:37:44.181125347 +0000 UTC m=+10239.214209545" watchObservedRunningTime="2026-02-21 09:37:44.187319317 +0000 UTC m=+10239.220403515"
Feb 21 09:37:49 crc kubenswrapper[4820]: I0221 09:37:49.874291 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:49 crc kubenswrapper[4820]: I0221 09:37:49.874656 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:49 crc kubenswrapper[4820]: I0221 09:37:49.926747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:50 crc kubenswrapper[4820]: I0221 09:37:50.266806 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:50 crc kubenswrapper[4820]: I0221 09:37:50.311091 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"]
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.243611 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q2vvg" podUID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerName="registry-server" containerID="cri-o://4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" gracePeriod=2
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.744801 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.928870 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") pod \"b861e2e2-bab2-43e3-ac35-db7964e69058\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") "
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.939193 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") pod \"b861e2e2-bab2-43e3-ac35-db7964e69058\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") "
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.939673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") pod \"b861e2e2-bab2-43e3-ac35-db7964e69058\" (UID: \"b861e2e2-bab2-43e3-ac35-db7964e69058\") "
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.940428 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities" (OuterVolumeSpecName: "utilities") pod "b861e2e2-bab2-43e3-ac35-db7964e69058" (UID: "b861e2e2-bab2-43e3-ac35-db7964e69058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.940827 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-utilities\") on node \"crc\" DevicePath \"\""
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.946619 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh" (OuterVolumeSpecName: "kube-api-access-57fhh") pod "b861e2e2-bab2-43e3-ac35-db7964e69058" (UID: "b861e2e2-bab2-43e3-ac35-db7964e69058"). InnerVolumeSpecName "kube-api-access-57fhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 21 09:37:52 crc kubenswrapper[4820]: I0221 09:37:52.998090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b861e2e2-bab2-43e3-ac35-db7964e69058" (UID: "b861e2e2-bab2-43e3-ac35-db7964e69058"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.043179 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fhh\" (UniqueName: \"kubernetes.io/projected/b861e2e2-bab2-43e3-ac35-db7964e69058-kube-api-access-57fhh\") on node \"crc\" DevicePath \"\""
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.043226 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b861e2e2-bab2-43e3-ac35-db7964e69058-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255511 4820 generic.go:334] "Generic (PLEG): container finished" podID="b861e2e2-bab2-43e3-ac35-db7964e69058" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855" exitCode=0
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"}
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255621 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2vvg"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2vvg" event={"ID":"b861e2e2-bab2-43e3-ac35-db7964e69058","Type":"ContainerDied","Data":"fabce4ab60996ed1cd1f12b28fa942f0d4e3f94c17a02564aa396320f5e9c6f1"}
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.255693 4820 scope.go:117] "RemoveContainer" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.278599 4820 scope.go:117] "RemoveContainer" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.310051 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"]
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.320571 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q2vvg"]
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.322473 4820 scope.go:117] "RemoveContainer" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354096 4820 scope.go:117] "RemoveContainer" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"
Feb 21 09:37:53 crc kubenswrapper[4820]: E0221 09:37:53.354489 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855\": container with ID starting with 4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855 not found: ID does not exist" containerID="4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354528 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855"} err="failed to get container status \"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855\": rpc error: code = NotFound desc = could not find container \"4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855\": container with ID starting with 4d177269d707ddec90e779d27a5cb00b3033b5528c48a832f634810f9b0f1855 not found: ID does not exist"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354568 4820 scope.go:117] "RemoveContainer" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"
Feb 21 09:37:53 crc kubenswrapper[4820]: E0221 09:37:53.354857 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86\": container with ID starting with a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86 not found: ID does not exist" containerID="a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354906 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86"} err="failed to get container status \"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86\": rpc error: code = NotFound desc = could not find container \"a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86\": container with ID starting with a09f6daa1cc40a6233fe6dc20c158cf95f3ddbf1369d60329d9a8199328f3c86 not found: ID does not exist"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.354939 4820 scope.go:117] "RemoveContainer" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"
Feb 21 09:37:53 crc kubenswrapper[4820]: E0221 09:37:53.355411 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7\": container with ID starting with dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7 not found: ID does not exist" containerID="dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.355475 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7"} err="failed to get container status \"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7\": rpc error: code = NotFound desc = could not find container \"dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7\": container with ID starting with dd6ec7b7343d40131c6c503b260b575ec6f4d41453d49104e98d3ccd7f7304a7 not found: ID does not exist"
Feb 21 09:37:53 crc kubenswrapper[4820]: I0221 09:37:53.713324 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b861e2e2-bab2-43e3-ac35-db7964e69058" path="/var/lib/kubelet/pods/b861e2e2-bab2-43e3-ac35-db7964e69058/volumes"
Feb 21 09:38:13 crc kubenswrapper[4820]: I0221 09:38:13.815921 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:38:13 crc kubenswrapper[4820]: I0221 09:38:13.816398 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection
refused"
Feb 21 09:38:43 crc kubenswrapper[4820]: I0221 09:38:43.815931 4820 patch_prober.go:28] interesting pod/machine-config-daemon-qth8z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 21 09:38:43 crc kubenswrapper[4820]: I0221 09:38:43.817384 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qth8z" podUID="ce38546e-524f-4801-8ee1-b4bb9d6c6dff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"